Summary
Artificial intelligence has dominated the headlines for several months due to the successes of large language models. This has prompted numerous debates about the possibility of, and timeline for, artificial general intelligence (AGI). Peter Voss has dedicated decades of his life to the pursuit of truly intelligent software through the approach of cognitive AI. In this episode he explains his approach to building AI in a more human-like fashion and the emphasis on learning rather than statistical prediction.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!
- Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
- Your host is Tobias Macey and today I'm interviewing Peter Voss about what is involved in making your AI applications more "human"
- Introduction
- How did you get involved in machine learning?
- Can you start by unpacking the idea of "human-like" AI?
- How does that contrast with the conception of "AGI"?
- The applications and limitations of GPT/LLM models have been dominating the popular conversation around AI. How do you see that impacting the overall ecosystem of ML/AI applications and investment?
- The fundamental/foundational challenge of every AI use case is sourcing appropriate data. What are the strategies that you have found useful to acquire, evaluate, and prepare data at an appropriate scale to build high quality models?
- What are the opportunities and limitations of causal modeling techniques for generalized AI models?
- As AI systems gain more sophistication there is a challenge with establishing and maintaining trust. What are the risks involved in deploying more human-level AI systems and monitoring their reliability?
- What are the practical/architectural methods necessary to build more cognitive AI systems?
- How would you characterize the ecosystem of tools/frameworks available for creating, evolving, and maintaining these applications?
- What are the most interesting, innovative, or unexpected ways that you have seen cognitive AI applied?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on designing/developing cognitive AI systems?
- When is cognitive AI the wrong choice?
- What do you have planned for the future of cognitive AI applications at Aigo?
Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
- Aigo.ai
- Artificial General Intelligence
- Cognitive AI
- Knowledge Graph
- Causal Modeling
- Bayesian Statistics
- Thinking, Fast and Slow by Daniel Kahneman (affiliate link)
- Agent-Based Modeling
- Reinforcement Learning
- DARPA 3 Waves of AI presentation
- Why Don't We Have AGI Yet? whitepaper
- Concepts Is All You Need Whitepaper
- Helen Keller
- Stephen Hawking
[00:00:11] Unknown:
Hello, and welcome to the Data Engineering Podcast, the show about modern data management. Dagster offers a new approach to building and running data platforms and data pipelines. It is an open source, cloud native orchestrator for the whole development life cycle with integrated lineage and observability, a declarative programming model, and best in class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise class hosted solution that offers serverless and hybrid deployments, enhanced security, and on demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started, and your first 30 days are free. Data lakes are notoriously complex.
For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte scale SQL analytics fast at a fraction of the cost of traditional methods so that you can meet all of your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and DoorDash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
Your host is Tobias Macey, and today I'm interviewing Peter Voss about what is involved in making your AI applications more human. So, Peter, can you start by introducing yourself?
[00:01:49] Unknown:
Yes, thanks for having me. I'm Peter Voss. I'm CEO and chief scientist of aigo.ai. And my mission is really to bring human-like, human-level intelligence to the world.
[00:02:04] Unknown:
And do you remember how you first got involved in machine learning?
[00:02:08] Unknown:
Yes. Well, actually, machine learning is just sort of a sideline. I guess today most people, when they think about AI or what they know about AI, is only about machine learning. But machine learning is only a small part of the field of artificial intelligence. So how I got into AI is, actually, I started out as an electronics engineer, started my own company. Then I fell in love with software, and my company turned into a software company, which ended up being quite successful. We went from the garage to 400 people and did an IPO.
It was when I exited that company that I had enough time on my hands to say, what big project do I want to tackle? And what occurred to me is that software really is very dumb. You know, if the programmer doesn't think of something, it'll just crash or give you an error message. It doesn't have common sense. So how can we make software intelligent? How can we have software that actually has common sense, that can learn, and that can reason? And that's what started me on the journey of trying to figure out how to build real artificial intelligence.
And that was some 20 plus years ago.
[00:03:21] Unknown:
You mentioned that you are focused on trying to make AI more human like or human level. And I'm wondering if you could just start by unpacking what that even means for an AI to be humanlike.
[00:03:33] Unknown:
Yes. So if we go back to when the term AI was coined some 60 years ago, what they had in mind was to have machines that can think, reason, and learn the way humans do, the way we do. And they actually thought they could crack this problem in a few years. Well, of course, it turned out to be much, much harder than that. So what happened in the field of AI is that it really became narrow AI, solving one problem at a time. So a good example of that would be Deep Blue, IBM's world champion chess software, and, you know, whether it's container optimization or some medical diagnosis or whatever it might be, they're narrow problems that are solved. Even Go, you know, AlphaGo, it's just that one problem that is being solved.
And the kicker here is that it's not actually the software that has the intelligence. It's the programmer who puts together the particular software to solve that particular problem. So when I started studying AI, I really spent a lot of time. I spent about 5 years studying all different aspects of intelligence and AI and so on. And what I realized is that there are core requirements of intelligence, and those are basically to be able to learn and reason interactively. So in around about 2000, I got together with some other people who wanted to capture the original dream of artificial intelligence to build thinking machines.
And we actually wrote a book, and the title of the book was Artificial General Intelligence, or AGI, which has now become a very common term. So we coined the term. 3 of us actually coined the term AGI in 2001. And the difference between AGI and conventional AI is that it can learn interactively, and it can learn in a way that conceptualizes things. So let me give you an example here. You know, if you're a scientist or researcher or just somebody learning stuff, you might read a book or you might read an article, and you integrate every sentence that you read. Every sentence that you read either triggers something and you say, oh, okay, I already know this, or this contradicts something I know, or this makes me think about something, or I should look up some further details.
So we actually digest the information that we read sort of sentence by sentence, paragraph by paragraph. And that updates the model that we have of the world. And that is what real intelligence is about, what we call cognitive AI. Whereas what everybody's doing today is generative statistical AI, which just blindly ingests all of the information without trying to make sense of it. So that's a big difference. So AGI or cognitive AI is really about building thinking machines, machines that think conceptually.
[00:06:46] Unknown:
That aspect of updating the model continuously, having some linkages between concepts, brings up a lot of other parallel fields to AI, and also some of the statistical approaches to machine learning. In particular, it brings to mind Bayesian statistics, which also brings up the idea of causal modeling. And then also from that concept linkage perspective, it brings up the idea of knowledge graphs, which are also being used to supplement things like generative AIs or other machine learning models. And I'm wondering if you can give me some idea of some of the overlaps and some of the ways that those concepts are disjoint from cognitive AI, the way that you think about it.
[00:07:29] Unknown:
Right. So I think the most fundamental problem with current generative systems is that, in fact, the word GPT already gives us a very good clue. G, generative. So it makes up stuff basically from, you know, all of the mass knowledge that it's acquired. But the knowledge it's acquired is good, bad, and ugly. It hasn't been integrated, sifted, or validated. It's just in there. So you basically generate stuff, you know, next token, essentially, from there. And, of course, that's amazingly powerful, but that's what it does. Now, the P is pretrained, which inherently tells you it's not really gonna learn anything when you use it.
So, you know, it's essentially a read only system. Now, of course, there are different tweaks that people try to use to overcome it, but the bottom line is the model itself does not change. And the T is it's transformer based, which basically means it's a back propagation type system, which has to be batch trained. So as long as you're using transformers, you are locked into that approach. Now, cognitive AI, the way we approach it, is we do have a dynamic knowledge graph that gets updated in real time. So it can't be a separate external database. It has to be the actual model itself, has to be this dynamic vector graph database, in our case, that, as in the example I gave, as it ingests a sentence or paragraph, it will update the model immediately, and it will, you know, pick up contradictions or gaps in knowledge that need to be filled and can think and reason about that. So it is really very, very different.
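To make that contrast concrete, here is a minimal, hypothetical sketch of real-time conceptual integration: each statement is checked against an in-memory graph as it arrives and is either learned, recognized as already known, or flagged as a contradiction or a gap. The class and method names are invented for illustration and are not Aigo's actual system.

```python
# A toy illustration of incremental knowledge integration, as contrasted with
# batch-trained models in the discussion above. Purely hypothetical; names and
# structure are invented for illustration, not taken from Aigo's system.

class DynamicKnowledgeGraph:
    def __init__(self):
        # (subject, relation) -> (object, source), e.g. ("water", "boils_at") -> ("100C", "textbook")
        self.facts = {}

    def ingest(self, subject, relation, obj, source):
        """Integrate one statement immediately, rather than batch-retraining."""
        key = (subject, relation)
        if key not in self.facts:
            self.facts[key] = (obj, source)
            return "learned"
        known_obj, known_source = self.facts[key]
        if known_obj == obj:
            return "already known"
        # A new statement conflicts with what is already in the model:
        # a cognitive system would reason about which source to trust.
        return f"contradiction: {known_obj} (from {known_source}) vs {obj} (from {source})"

    def query(self, subject, relation):
        entry = self.facts.get((subject, relation))
        return entry[0] if entry else "gap: unknown, needs follow-up"


if __name__ == "__main__":
    kg = DynamicKnowledgeGraph()
    print(kg.ingest("water", "boils_at", "100C", "textbook"))    # learned
    print(kg.ingest("water", "boils_at", "90C", "forum post"))   # contradiction
    print(kg.query("water", "freezes_at"))                       # gap
```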
[00:09:24] Unknown:
That idea of updating the conceptual model of the AI system and ensuring that the data that it's fed has some impact, it also brings up the question of skepticism or being aware of the potential for fallacies or misleading information, which we as humans have problems with. It also brings up the idea of garbage in, garbage out for AI systems, and I'm curious how you think about the validation and quality gating of information that you feed into these models to ensure that they don't build some updated view of the world that is based entirely on propaganda or false concepts?
[00:10:07] Unknown:
Right. Yes. It's a good question. And, I mean, humans are actually quite capable, at their best, of, you know, discerning what is true information or not. It's really our reptile brain, our emotions, that kind of get in the way. So an AGI really won't have that handicap of being emotional about something, where the ego gets in the way, or, yeah, just being part of a tribe that you have to agree with. It's inherently not going to have those kinds of emotional barriers. And it'll also be aware of the kind of fallacies that a cognitive system inherently can fall into. So it'll more easily be able to compensate and adjust.
Now what we're also talking about, of course, is that an AGI will have system 2 thinking, you know, in terms of Daniel Kahneman's model of system 1 and system 2, you know, subconscious versus a supervisory metacognitive function that basically monitors your thought process itself. And that's a really important aspect of intelligence, is that you can think about your thinking. You know, you can monitor your thinking and say, am I getting carried away? Does this make sense? You know, should I double check it in some way? So I think an AGI inherently has a lot of advantages over humans to avoid a lot of these mistakes. And also, it can plow through so much more information and double check things, whereas, you know, we wouldn't have the time, we wouldn't have the patience to do that. So I think, you know, we're in a much better place to have a robust system. And AGIs can, of course, also check each other, which is also important because they may start off with different sources. They may have gone down a different path of understanding some particular problem. So a good example here is, you know, if you train up an AGI as a cancer researcher, for example, it'll have a particular view of the problem.
And now you can make a million copies of it, and each one goes off and pursues different aspects of that. And, you know, they could come back and then compare notes, so to speak, and see which of the particular avenues are more promising than others.
[00:12:31] Unknown:
Trying to build some more contextual understanding for people who are coming from other areas of the machine learning AI ecosystem about how to think about the technical aspects of building a cognitive AI. Things that come to mind again are reinforcement learning, agent based modeling. There's been a lot of talk about the concept of maybe we'll reach, you know, AGI just by throwing more and more data at the problem, which obviously is not going to be the whole solution. I'm wondering if you can maybe draw some parallels between other avenues of ML research and model development and how to bring that more towards the method of cognitive AI that you're discussing?
[00:13:15] Unknown:
So I think it's really fundamentally on the wrong path to AGI. So there isn't much to be salvaged. You know, again, the GPT approach, the transformer approach, the big data approach. And DARPA actually gave a presentation a few years ago where they talk about the 3 waves of AI. The first wave of AI was what's now called good old fashioned AI, you know, sort of logic based approaches and so on. We are now in the second wave of AI, which is basically statistical, you know, neural network type approaches. And the third one is the cognitive, adaptive wave, which we really haven't quite reached yet. I mean, our company has been working on it for, you know, more than 15 years, but we're a small company. There are not a lot of players in the cognitive AI field right now. And let me give a few examples of why it's fundamentally different.
So one of them is your starting point really has to be understanding intelligence, human intelligence. That has to be the starting point: to understand cognition and say, what does cognition require, rather than starting from, hey, we have a lot of data, we have a lot of computing power, you know, what can we do with it? That's the hammer we've got, so, you know, everything starts to look like a nail. And that's really the era we're in. And, of course, enormous strides have been made. A lot of money is being made, and a lot of money is being thrown at it. So, you know, it's natural that people would follow that. In fact, we have a whole generation of AI scientists now who don't know anything other than big data statistical approaches. So, you know, that's where the momentum is right now. But if your starting point is, what does cognition require,
what are the core requirements of intelligence, you come to a very different conclusion. And the conclusion is, well, a couple of things. It has to be real time conceptual integration. You can't get away from that. So it can't be pretrained. The other thing that you understand is that intelligence is not about having knowledge. So it doesn't matter how much knowledge you have or how many skills you have trained into your model. That's not the core of intelligence. The core of intelligence is being able to acquire knowledge, being able to learn. So all the benchmarks and everything are really misaligned with achieving AGI because they measure how much knowledge the system has.
That's how the benchmarks are. So if you want to get published, if you want to get funded, it's all about, can you build bigger models, models that have more knowledge? But that's barking up the wrong tree. It's really about understanding how to build a system that can learn. So it'll have very little knowledge initially, but it'll be very powerful in being able to acquire new knowledge conceptually and validate it as it goes along. So there has to be that shift in understanding that it's small data, real time learning that we have to solve.
And, you know, that's a different architecture.
[00:16:38] Unknown:
And so digging into that technical aspect and the architectural concepts of building a cognitive AI, and building it in such a way that it is focused on that learning activity and being able to extract concepts from the information that it's fed, I'm wondering if there are any particular modalities of data that are easiest to work with in that approach. Humans are very multimodal, multisensory, whereas most machine learning models, even as we build more multimodal capabilities, are focused on a particular sensory domain, whether it's vision or text or language or audio. And I'm wondering, for purposes of building these cognitive models, how should we be thinking about the architectural approaches, the hardware capabilities, the data preparation that's necessary to be able to feed information into these systems in such a way that they're able to understand what the actual conceptual elements are and incorporate them into their understanding of the world?
[00:17:41] Unknown:
Yes. So I've recently published 2 white papers. One is, why don't we have AGI yet? And so that gives some of the background that I've been talking about. And the other one gives a sense of what our approach is and what I believe the right approach is. So you need to have a knowledge representation which can be updated. You need to have learning mechanisms which can be updated in real time. You need to have long term memory, short term memory, context, you know, that you can use for understanding things and reasoning, and all of it has to be integrated in a highly efficient system.
The system that we've built, for example, our knowledge representation, our knowledge graph, is literally a thousand times faster than any graph database that's available, because we've specifically designed it for this purpose. Now in terms of sense input and data preparation, the model that I favor is something I call the Helen Hawking model of AGI. Now what I mean by that is think about Helen Keller, who had very limited sense acuity, and Stephen Hawking, who had very little dexterity; both were extremely smart people. So you can have a lot of intelligence, you can have learning ability, with limited sense acuity and limited dexterity.
Now I'd love to have a robot that can explore the real world and learn from that, but robotics is just really, really hard. It's really expensive. You have all sorts of problems running simulations and so on. So, basically, how can we build an intelligent system where we don't have to deal with, you know, the super high resolution vision that humans have and the dexterity that we have? So the approach that we're taking is basically saying your sense input is a computer desktop. You know?
So you can basically have potentially very complex vision because you can be looking at a video. You can have a camera to the outside world. But you can also start with something much simpler in learning. So you can kind of gradually crank up the resolution of vision that you have. And then the dexterity is basically a mouse and keyboard. And, you know, between those, you can interact with the world. So that is the approach we're taking. You know, it's not the only approach we're taking, but it, to us, gives a balance between having a rich enough system that has some grounding and yet is not that complex and overwhelming in terms of the amount of processing power that you require.
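For a sense of what the "desktop as senses, mouse and keyboard as dexterity" setup could look like in code, here is a schematic sketch. The interface is entirely hypothetical; a real agent would wire these methods to screen capture and input injection rather than the placeholders used here.

```python
# A schematic sketch of the "desktop as sensor, mouse/keyboard as actuator"
# idea described above. The interface is invented for illustration.

from dataclasses import dataclass
from typing import Tuple


@dataclass
class Observation:
    screen_text: str             # e.g. OCR'd or accessibility-tree text
    resolution: Tuple[int, int]  # can start low and be "cranked up" over time


class DesktopBody:
    """Low-bandwidth senses and dexterity, in the spirit of the Helen Hawking model."""

    def observe(self) -> Observation:
        # Placeholder: a real implementation would capture the screen here.
        return Observation(screen_text="Welcome to the demo app", resolution=(640, 480))

    def click(self, x: int, y: int) -> None:
        print(f"click at ({x}, {y})")   # placeholder for mouse input injection

    def type_text(self, text: str) -> None:
        print(f"type: {text}")          # placeholder for keyboard input injection


if __name__ == "__main__":
    body = DesktopBody()
    obs = body.observe()
    if "Welcome" in obs.screen_text:
        body.click(100, 200)
        body.type_text("hello")
```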
[00:20:41] Unknown:
In terms of the application of these cognitive systems, as we've been seeing with the hype around these large language models and all of the supposed capability and applications that they have, there's, you know, the inevitable hype cycle, but then there's also the question of building and maintaining trust and that uncanny valley moment where we think the AIs can do more than they can, and then we're very disappointed when they can't, or we apply them in settings where they have no right operating and it creates a situation of risk or potential harm.
And I'm curious, what are some of the ways that you think about those elements as we build cognitive AIs and get closer to that epoch of AGI, ensuring that we aren't jumping the gun, so to speak, and putting these models into environments and situations where we either run the risk of losing trust and putting us into another AI winter or causing harm because of the fact that the models aren't being properly tested and validated and monitored?
[00:21:49] Unknown:
Yes. So I think we're already seeing in large language models, you know, some of the blowback. You know? I don't know if you saw the Chevrolet dealer where their chatbot actually sold a car for $1, and then the user said, well, are you sure this is a legal agreement? And the chatbot said, yes, this is a legal agreement, you can have the car for a dollar, you know, and some other disasters like that. So clearly, you know, some companies are implementing LLMs in sort of totally the wrong applications.
So LLMs, I mean, the technology is absolutely amazing. I think we're all blown away by, you know, what it can do in terms of making suggestions and summaries and writing poems and, you know, just the vast amount of knowledge that is there and how it can generate this, you know, fantastic text from it and summaries. I mean, it is really quite phenomenal what these LLMs can do. But the simple rule here is there always needs to be a human in the loop. You cannot trust these systems. You know, they sound extremely confident in what they give you back.
They don't know what they're saying. They don't know how confident they can be about things. And that's basically where the danger lies. So you cannot run them autonomously. And from a practical point of view, my company is really 2 divisions. One is the commercial division, where we automate call center operations. And the other one is our development division, where we continue developing our technology to get closer and closer to human level. And, you know, we talk to a lot of companies that want to automate their call center. They want to replace call center agents because it's just hard to find them and train them, they're expensive, and, you know, of variable quality.
So for all these various reasons, they want to automate service calls. And we can do that extremely well with our cognitive AI, where everything is predictable. The legal department can sign off on it. The marketing department can sign off on it. And, you know, the customer experience. So we get very, very good results. Companies who've tried to do that with LLMs basically just have not succeeded at all. Because if you even have, you know, a 3% error rate or 5% error rate or something like that, by the time you have a conversation that has, you know, 10 steps or so, I mean, you're guaranteed to get into trouble. And, you know, if your system is hooked up to APIs where it can, you know, give the customer a credit or cancel an order or change the delivery address or whatever it might be, you know, it's just a disaster. So, really, the correct applications for LLMs are as a tool, where a human is always in charge and in control. And it can be extremely powerful for that, you know, where you can ingest all of a company's documentation, and you can do a query, and it can give you some information. But the human has to say, well, is this actually relevant for what I'm talking about? Is it appropriate? You know, does it make sense?
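To put that compounding-error point in numbers, here is a quick back-of-envelope check; the 3% per-step figure is just the rate mentioned above, not a measurement.

```python
# Rough illustration of why small per-step error rates blow up over a
# multi-step conversation, using the 3% figure mentioned above.

per_step_error = 0.03   # 3% chance of a mistake on any one turn (assumed)
steps = 10              # length of a typical service conversation (assumed)

p_flawless = (1 - per_step_error) ** steps
print(f"Chance of a flawless 10-step conversation: {p_flawless:.0%}")
print(f"Chance of at least one error: {1 - p_flawless:.0%}")
# => roughly 74% flawless, i.e. about 1 in 4 conversations goes wrong
#    somewhere, before even considering 5% error rates or longer dialogs.
```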
[00:25:16] Unknown:
One of the challenges there is that the overarching promise and goal of AI is that it helps with automation, and we wanna be able to fully automate things that require humans. Obviously, reducing the amount of time and energy required by humans to achieve a particular task is beneficial, but the pursuit of profit is always going to push further and further towards full automation and maximizing capabilities. And I'm curious, what are some of the either regulatory or ethical or business safeguards that we need to be considering as we focus more on bringing AI to the level of capability and capacity of humans or, like, beyond the capacity of humans?
[00:26:05] Unknown:
I think the normal sort of business rules, norms, and laws really apply. I think, you know, a company is liable for the product that it produces. But there are such enormous benefits to be gained by having AGI, by having human-level AI. A lot of the concerns, a lot of the problems we have right now with AI, is really that it isn't smart enough. So it's a lack of intelligence that's causing a lot of problems rather than it being too smart. And the reason I'm pursuing AGI so vigorously is I really believe that it will enhance human flourishing tremendously.
And I see that in a number of areas. I could maybe put it into sort of 3 different buckets. The one I already mentioned in passing is research. You know, again, imagine having an AI cancer researcher trained up, and you can now make a million copies of that. We are going to make so much more progress, rapid progress, on cancer. Now you apply that to all sorts of different areas where research can really enhance human life. All other diseases, pollution, energy, food production, nanotechnology, whatever.
There are so many areas that would benefit from having more scientists that don't have egos getting in the way, that can work 24/7, that are much more rational than we are as humans. For humans, rationality is an evolutionary afterthought. We're not really that good at it. For AIs, it'll be their natural way of cognition, their natural cognition. So AI scientists helping us solve a lot of the problems that face humanity, I think, is just very exciting to me. The second bucket would be just very dramatically reducing the cost of goods and services, which basically just creates wealth. More people will be able to afford things by dramatically reducing the cost of goods and services.
And the third area, which I find actually the most exciting, is what I call a personal, personal, personal assistant. The reason I have 3 personals there is there are 3 different meanings of the word personal that are relevant here. The first one is it's your personal assistant. You own it. It serves your agenda, not some mega corporation's agenda. The second personal is it's hyper personalized to you. It gets to know your history, your dreams, your goals, your preferences, who you do business with, and so on. And the third personal is the one of privacy, that you decide what it shares with whom.
And I think if each of us had this personal, personal assistant, it's like an exocortex, really. It's an extension of our own cognition that can, on the other hand, be like an angel on our shoulder that can kind of give us good advice, help us avoid some of the mistakes that we tend to make when our emotions run away or we don't think carefully enough about things. So I think AGI will be tremendously helpful for human flourishing.
[00:29:38] Unknown:
And in terms of the technical capacity, the available body of research, and the general availability of information about how to build these cognitive AI systems, I'm wondering if you can give your sense of the state of the ecosystem for that and some of the work that can and should be done to improve the general availability and awareness of this approach towards AI systems?
[00:30:08] Unknown:
So at the moment, all the momentum is clearly in large language models. You know? That's where, you know, billions and billions of dollars are flowing. And, you know, there are tremendous tools, and there's a tremendous ecosystem for that. And that's not going away anytime quickly because it just has so much momentum. And, unfortunately, the way VCs operate is they tend to follow momentum. You know, investing generally, since I think the dotcom boom, has turned from value investing into momentum investing. You know, money goes where there's action.
So we need to really see that change, with cognitive AI being recognized and accepted and money flowing into that. At the moment, there really is very little in terms of commonly available infrastructure. We're one of the very few companies working on it. Intel has a fairly sizable internal project for cognitive AI, and there are a few others. But, yeah, there's really very, very little effort. And, you know, we are trying to actually scale up our system, our team, now. That's why we're looking for investment to accelerate the development that we're doing on the AGI side.
But there is really very little infrastructure available right now, unfortunately.
[00:31:42] Unknown:
Given that lack of investment, lack of infrastructure for cognitive AI approaches, what are maybe some of the ways that the existing ecosystem of tooling can be repurposed and bent towards the construction and growth of cognitive AI?
[00:32:00] Unknown:
So I think there's a great open source community that could be leveraged. And, you know, we're exploring that as a possibility to get people excited. But we need to have enough of a core infrastructure, and we really need to have enough resources to be able to support an open source community as well. So that's one area. Of course, the fact that hardware continues to be pushed out and accelerated is very useful. In fact, what LLMs have shown, or what they've demonstrated, I think, is that we're probably not that far away from having enough hardware. I mean, when you look at the inference cycle, you can now run that on a smartphone.
You know, barely, but you can. So we're actually very close. If the inference for a cognitive system isn't much more expensive than inference for a statistical system, which I believe will be the case based on our experience and so on, it basically means you can run an AGI, a cognitive AI, on, you know, a phone or a small computer. We're also seeing that in terms of training a system, it requires a tiny, tiny fraction of the hardware. I mean, we train our systems on, you know, 5 year old laptops, basically. I mean, not always, but we can.
Now, of course, our models are still relatively small. But even so, we're not really near hitting hardware limitations. So I think the biggest benefit of LLMs to building AGI, cognitive AI, is the vast amount of information that is embedded in the LLMs, if you can extract it reliably, and I think there are ways of doing that by asking the LLM in the right kind of way. I mean, a lot of the disasters we're seeing with LLMs and a lot of the really egregious errors are, one, people trying to break them, and, you know, that's where we see a lot of the examples, or people not being super careful about it. So I think if you can set up your cognitive AI training system, it can actually extract a lot of valuable information. You can cross check that as well, you know, with other sources, with other LLMs, and you can ask the same question in different ways and see if it gives you the same answer. So I think that's extremely valuable.
Because one of the concerns over the years for building AGI, from our perspective, has always been, how do we teach the system all of this common sense knowledge that it needs to have? And with LLMs, I think that has become a lot more feasible.
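As a rough illustration of that cross-checking idea, the sketch below asks the same question in several phrasings and only keeps an answer the phrasings agree on. The `ask_llm` function is a stand-in for whatever model client you use; nothing here is a real API or Aigo's actual pipeline.

```python
# A rough sketch of cross-checking an LLM by rephrasing: accept an answer only
# if enough differently-worded prompts converge on it. Hypothetical helper names.

from collections import Counter


def ask_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for an actual LLM client call")


def extract_fact(paraphrases, min_agreement=0.75):
    """Return an answer only if enough phrasings of the question agree on it."""
    answers = [ask_llm(p).strip().lower() for p in paraphrases]
    answer, count = Counter(answers).most_common(1)[0]
    if count / len(answers) >= min_agreement:
        return answer   # candidate fact, still to be tagged with sources
    return None         # inconsistent: do not ingest into the knowledge graph


# Example usage (requires a real ask_llm implementation):
# extract_fact([
#     "What year was the transistor invented?",
#     "In which year did the invention of the transistor occur?",
#     "The transistor was invented in which year?",
# ])
```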
[00:35:01] Unknown:
And then another aspect of the whole AI application space, the regulatory burdens around it, the development and maintenance of trust, particularly as we start thinking about bringing AI into more sensitive areas like medicine or transportation, is the requirement around explainability and understandability, and then, in the creative arena, being able to appropriately provide attribution. I'm wondering how the cognitive approach to AI modeling and development either helps or complicates that aspect of the problem.
[00:35:43] Unknown:
So, inherently, our design is not a black box at all. So, you know, everything is scrutable. Now having said that, of course, some of the knowledge was acquired in very complex ways, and, you know, a lot of thought went into it. So it might take quite a bit of analysis to pin down exactly why the AI came to a certain conclusion. But you can also ask it, because it has metacognition, you know, like you can ask a human where they got it from. But in addition to that, because it's learning incrementally, you can literally tag every piece of information that it acquires with where it got the information from, what the source was, when it got the information, who it got it from. And then, of course, the reliability of the source can be tagged as well. So it's a huge advantage that we have with our cognitive AI approach.
You really have that transparency. You can audit the system, and you can tag knowledge sources. So it really overcomes all of the limitations of LLMs. One of the things you asked me earlier that I think I didn't quite get to is, how do we train the system? You know, I mentioned that we can extract information from an LLM. But the core knowledge that the system needs to acquire, it needs a careful curriculum to do this effectively. And, in fact, a large part of our team is what we call AI psychologists. Their skill is basically linguistics, education, cognitive psychology, and so on. So it's finding the most effective way to quickly teach the system what it needs to know in a robust way. So that's a big part of the technology of achieving cognitive AI, is building this effective curriculum for training the system.
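Here is a minimal sketch of what that provenance tagging could look like as a data structure. The field names (statement, source, reliability, timestamp) are illustrative assumptions, not anything from Aigo's actual representation.

```python
# A minimal sketch of provenance tagging: every ingested statement carries
# where it came from, when, and how much the source is trusted, so conclusions
# can later be audited. Field names are illustrative.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class TaggedFact:
    statement: str
    source: str          # e.g. a URL, document id, or person
    reliability: float   # 0.0 (untrusted) .. 1.0 (verified)
    acquired_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def audit(facts, statement):
    """List every source that supports a given statement, for explainability."""
    return [(f.source, f.reliability, f.acquired_at) for f in facts if f.statement == statement]


if __name__ == "__main__":
    facts = [
        TaggedFact("aspirin reduces fever", "pharmacology textbook", 0.9),
        TaggedFact("aspirin reduces fever", "forum comment", 0.3),
    ]
    print(audit(facts, "aspirin reduces fever"))
```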
[00:37:52] Unknown:
And another element of the problem, as you're talking about being able to tag all of this information as it's being ingested and incorporated into the model, it brings up the question of storage capacity and retrieval methods, where with large language models, they're effectively a compression algorithm on top of information. And then you have things like retrieval augmented generation, where you have additional context corpora that you can pass to the LLM during the question response cycle. For cognitive AIs, because you're feeding it in incrementally and the knowledge is being incorporated into the model in real time, I'm curious how you need to think about the storage and retrieval architecture of the model, and how much of that knowledge lies resident in the, you know, the incrementally built binary or whatever the actual artifact is, versus how much of it is a storage and retrieval system and the model knowing how and when to retrieve different pieces of information?
[00:38:53] Unknown:
Yeah. Very good question. Now first of all, we see the system, again, as a Helen Hawking type of system. So it will not inherently have a lot of visual information in its model or a lot of sort of dexterity encoded. So that massively reduces the amount of memory you need. Because when it comes down to natural language, you know, you can have several lifetimes' worth of natural language interaction, all the books you've read and so on, and they can easily be kept in today's memory. So our model is completely kept in memory. It's not trying to compete with LLMs that have all the world's knowledge in the one model.
You don't really need that, because the system will also be an excellent tool user, which is, of course, one of the unique features of human level intelligence, that we can easily get to information. And an AGI, of course, will have it much easier because it can literally think about something and look it up in a database somewhere or query an LLM or, you know, whatever. So I see the cognitive AI will, you know, have sort of the knowledge, or maybe a few times the knowledge, that a human would have, but not the knowledge of all humanity, of all mankind. It doesn't need to do that. And they will tend to specialize in certain areas, whether it's your personal personal assistant or a cancer researcher or whatever. But they'll have the general ability to, of course, plug into the Internet and instantaneously call up information that they need.
So I don't see memory limitation as being a problem. You know, with 10 gigabytes or tens of gigabytes of RAM, there's an enormous amount of information you can store in there.
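A quick back-of-envelope check of that claim, with rough assumed reading rates rather than measured figures:

```python
# A back-of-envelope check of the claim above that lifetimes of natural
# language fit comfortably in RAM. The rate figures are rough assumptions
# for illustration, not measurements.

words_per_minute = 250   # brisk reading speed (assumption)
hours_per_day = 8        # reading/conversation time per day (assumption)
years = 80
bytes_per_word = 6       # ~5 characters plus a space

words = words_per_minute * 60 * hours_per_day * 365 * years
gigabytes = words * bytes_per_word / 1e9
print(f"{words:,} words ~= {gigabytes:.1f} GB of raw text")
# => roughly 3.5 billion words, on the order of 20 GB: within "tens of
#    gigabytes of RAM", even before any compression or structuring.
```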
[00:40:54] Unknown:
You preempted my next question by saying that the cognitive AI could be interacting with the LLMs, which was going to be one of my questions: as you were saying, being able to retrieve and interact with external systems to retrieve, process, and, I guess, evaluate the quality of information. I think that's also an interesting application as well, being able to put these in the path of these large language models, where you have, effectively, the world's knowledge compressed into these systems, and then being able to have something do that processing of information before feeding it back to the human, so that the human doesn't have to waste their cycles doing that evaluation of, is this thing that it said back to me actually accurate or true, or is it just making things up and hallucinating?
[00:41:38] Unknown:
Yes. Well, in fact, right now, of course, our reference point is current LLMs. But once you get to AGI or close to AGI, what I see happening is really that it will extract information from LLMs and store it in a validated, structured knowledge graph, you know, or multiple knowledge graphs, so that you're essentially building something more like Wikipedia. Okay, Wikipedia has, you know, some political problems, but you basically have a reliable knowledge graph, an external knowledge graph, that draws information from LLMs but that has already been validated and tagged. Because once you ask a large language model about a piece of information, you can then also follow up and say, well, where did it probably get this information from? And you can do a Google search, or you can use other methods. So you can essentially annotate and clean up the information from LLMs.
And, you know, then you'll have a much more reliable reference source after that.
[00:42:46] Unknown:
In terms of your experience working in this space, helping to drive forward this area of cognitive AI and that approach towards AGI, what are some of the most interesting, innovative, or unexpected ways that you've seen these cognitive models used?
[00:43:02] Unknown:
Well, you know, it really blew me away, as it did most people, how it can generate, you know, valid text, basically, you know, prose. And that really is quite amazing. And, you know, we use it for brainstorming, to summarize things, to get ideas, and it's really, really powerful. And I think that was quite unexpected, that just by scaling up, you know, the early GPT models, you could get this quality of natural language being generated and the amount of information there is. So, I mean, that still amazes me, you know, when I use these tools.
[00:43:46] Unknown:
And in your work of building cognitive models, what are the most interesting or unexpected or challenging lessons that you've learned personally, and maybe some of the key takeaways that you'd like to share with the listeners?
[00:44:00] Unknown:
So we are actually, you know, very pleased with how we've been able to commercialize early versions of our AGI, our proto AGI. You know, I don't wanna call our current technology AGI, but our cognitive AI approach. You know, one of our big customers is the 1-800-Flowers group of companies, Harry & David and so on. And we're using this very, very effectively. You know, the parts we've implemented handle some very complex interactions with customers and products, where we get just under 90% self-service. The system gets to know whom the customer buys gifts for, what the relationship is, what kind of gifts they buy for them. And it has all the business rules that it needs to deal with, and it interfaces with APIs, back end systems, and so on. And it can do that very effectively in an auditable, reliable way.
So that's really fantastic. The limitation right now is that there's still a lot of human labor involved in teaching the system, in setting up the ontologies and so on, which, as we develop AGI capabilities further, the system will be able to learn by itself to a much bigger extent. Now one of the applications we'd love to implement, and we're talking to a number of universities, hopefully we'll do something, would be a student assistant, you know, to have a cognitive AI that is your personal personal assistant as a student, especially when you first get to university and, you know, everything overwhelms you, and it can help you find your way around, help you with your studies and, you know, where to get books and meals and friendships and study groups and whatnot. So those are the kinds of applications we really like as a stepping stone towards, you know, getting to our goal of true AGI.
[00:46:08] Unknown:
And for people who are working in the space of machine learning and AI, and they're trying to build applications, build systems, what are the cases where a cognitive approach is the wrong choice or overkill?
[00:46:22] Unknown:
Well, first of all, cognitive AI is still in its infancy, you know? So you don't have a lot of choices available to you. LLMs are really good at taking a lot of information and being able to give it back to you in a sort of semi reliable way. You know? So if a company has a lot of documents that you can basically fine tune your system or train your system with, or have an external vector database for, it can be very effective in helping you. I think it's excellent for idea generation, for summaries.
So there are, you know, tons and tons of applications where LLMs are really useful. Now, you know, as we were saying, once you get full AGI, LLMs are really gonna be redundant eventually, because you will have more reliable data sources that will have been created. But, you know, we're not quite there yet.
[00:47:37] Unknown:
And as you continue to build and iterate on the work that you're doing at Aigo and in your research efforts, what are some of the things you have planned for the near to medium term or any particular projects or problem areas that you're excited to explore further?
[00:47:52] Unknown:
So on the commercial side, the most obvious area that we're already successful at is replacing call center agents. A lot of people talk about augmenting, and, you know, LLMs can certainly help in that, where, again, they can expose FAQs and things like that and help call center agents. But ultimately, that only buys you so much. Again, people talk about it's going to replace programmers or so. No, it's not. No, it's not. I mean, I program every day. I use these tools. Too many times, they give you the wrong answer, and you end up spending more time debugging the code that it gave you than what it would have taken you in the first place to write it. So, you know, it's a productivity tool, but it's not gonna replace programmers. So replacing call center agents is a very obvious thing for us to continue expanding, and that's really the most obvious one on the commercial side. And there are, you know, tens of millions of people across the world doing those kinds of jobs. And most people don't particularly like doing it, either.
It's a shocking statistic that, I believe, the average longevity of a call center agent is 6 weeks. Now, obviously, that includes in the average a lot of people who quit after the first day or the first week and say, I can't handle this, you know? People screaming at me all day long while I sit in a cubicle. So, you know, that's an obvious area that will continue. Now on the development side, you know, we are really just looking for funding. We have a very detailed road map of what we need to do to get to AGI, but we can't do it with, you know, the 3 dozen people that we have.
You know, I don't think we need thousands of people working on it, but we do need, you know, at least 50 people or so on the team. We have very specific development that has to happen to crank up the IQ of the system. And the challenge in convincing people that cognitive AI is the way to go is that almost everybody in the field of AI these days has a background in statistics, mathematics, logic. You know, those are the skills that are required for generative AI. And they can't really even begin to relate to a cognitive approach. When you talk about mental processes, thinking and, you know, reasoning and concept formation and so on, you just get blank stares. You know? And so it's very difficult to even have a conversation about why cognitive AI or how cognitive AI can work. That is a real challenge.
[00:50:51] Unknown:
Are there any other aspects of the work that you're doing at Aigo or the overall space of cognitive AI and its juxtaposition to the statistical approaches that we've been using for ML and AI systems that we didn't discuss yet that you'd like to cover before we close out the show?
[00:51:08] Unknown:
No. I think, you know, really just to recap: LLMs have a lot of use cases. And, obviously, there is a lot of money, there's a lot of momentum, and a lot of people will make a lot of money with it, and more people will lose money, because, obviously, most of the startups will fail, because there really isn't much of a moat. The moat is, you know, do you have more money to train bigger systems? Apart from that, you know, specialized knowledge in a particular field might be the moat, but that's not scalable, not highly scalable.
So, you know, that's where we are. But the 3 big problems that aren't gonna go away, that are inherent in them, are their lack of reliability, their lack of being able to learn in real time and update their model, and the cost. They are actually very expensive. You know, when you start trying to implement them, they're very expensive to train, and they're also not cheap to run. So those are the inherent limitations that, you know, are there. And for cognitive AI, the biggest problem is that we don't have enough people working on it. There's not enough money flowing into it. When people ask me when we will get AGI, typically I answer, I don't measure it in years. I measure it in dollars.
It's really, you know, how soon will we have some real money flowing into the field so that we can, you know, actually develop it to human level.
[00:52:36] Unknown:
Alright. Well, for anybody who wants to get in touch with you and follow along with the work that you're doing, I'll have you add your preferred contact information to the show notes. And as the final question, I'd like to get your perspective on what you see as being the biggest barrier to adoption for machine learning today.
[00:52:51] Unknown:
Well, I think I just mentioned it, you know: it's reliability, the lack of real time training and learning, and the cost. So, you know, I think they're pretty obvious.
[00:53:06] Unknown:
Alright. Well, thank you very much for taking the time today to join me and share the work that you've been doing on developing and improving the cognitive modeling approach and your journey towards AGI. It's definitely a very interesting body of work that you're doing. I appreciate all the time and energy that you and your team are putting into that, and I hope you enjoy the rest of your day.
[00:53:28] Unknown:
Well, thank you, Tobias.
[00:53:36] Unknown:
Thank you for listening. Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used, and the Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site at dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, and read the show notes. And if you've learned something or tried out a project from the show, then tell us about it. Email hosts@dataengineeringpodcast.com with your story. And to help other people find the show, please leave a review on Apple Podcasts and tell your friends and coworkers.
Hello, and welcome to the Data Engineering Podcast, the show about modern data management. A new approach to building and running data platforms and data pipelines. It is an open source, cloud native orchestrator for the whole development life cycle with integrated lineage and observability, a declarative programming model, and best in class testability. Your team can get up and running in minutes thanks to DAXTER Cloud, an enterprise class hosted solution that offers serverless and hybrid deployments, enhanced security, and on demand ephemeral test deployments. Go to data engineering podcast.com/daxter today to get started, and your first 30 days are free. Data lakes are notoriously complex.
For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte scale SQL analytics fast at a fraction of the cost of traditional methods so that you can meet all of your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and DoorDash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first class support for Apache Iceberg, Delta Lake and Hoody, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com slash starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
Your host is Tobias Macey, and today I'm interviewing Peter Voss about what is involved in making your AI applications more human. So, Peter, can you start by introducing yourself?
[00:01:49] Unknown:
Yes. I'm, thanks for having me. It's I'm Peter Voss. I'm CEO and chief scientist of aigo.ai. And my mission is really to bring humanlikehumanlevelintelligence to the world.
[00:02:04] Unknown:
And do you remember how you first got involved in machine learning?
[00:02:08] Unknown:
Yes. Well, actually, machine learning is just sort of a sideline. I guess today most people, when they think about AI or what they know about AI, is only about machine learning. But machine learning is only a small part of the field of artificial intelligence. So how I got into AI is, actually, I started out as an electronics engineer, started my own company. Then I fell in love with software, and my company turned into a software company, which ended up being quite successful. We went from the garage to 400 people and did an IPO.
It's when I exited that company I had enough sort of time on my hands to say, what big project do I want to tackle? And what occurred to me is that software really is very dumb. You know, if the programmer doesn't think of something, it'll just crash or give you an error message. It doesn't have common sense. So how can we make software intelligent? How can we have software that actually has common sense, that can learn, and that can reason? And that's what started me on on the journey of trying to figure out how to build real artificial intelligence.
And that was some 20 plus years ago.
[00:03:21] Unknown:
You mentioned that you are focused on trying to make AI more human like or human level. And I'm wondering if you could just start by unpacking what that even means for an AI to be humanlike.
[00:03:33] Unknown:
Yes. So if we go back to when the term AI was coined some 60 years ago, what they had in mind was to have machines that can think, reason, and learn the way humans do, the way we do. And they actually thought they could crack this problem in a few years. Well, of course, it turned out to be much, much harder than that. So what happened in the field of AI is that it really became narrow AI, solving one problem at a time. A good example of that would be Deep Blue, IBM's world champion chess software, or whether it's container optimization or some medical diagnosis or whatever it might be. They are narrow problems that are solved. Even Go, you know, AlphaGo, it's just that one problem that is being solved.
And the kicker here is that it's not actually the software that has the intelligence. It's the programmer who puts together the particular software to solve that particular problem. So when I started studying AI, I really spent a lot of time. I spent about 5 years studying all different aspects of intelligence and AI and so on. And what I realized is that there are core requirements of intelligence, and those are basically to be able to learn and reason interactively. So in around 2000, I got together with some other people who wanted to capture the original dream of artificial intelligence to build thinking machines.
And we actually wrote a book, and the title of the book was Artificial General Intelligence, or AGI, which has now become a very common term. So we coined the term. Three of us actually coined the term AGI in 2001. And the difference between AGI and conventional AI is that it can learn interactively, and it can learn in a way that conceptualizes things. So let me give you an example here. You know, if you're a scientist or researcher or just somebody learning stuff, you might read a book or you might read an article, and you integrate every sentence that you read. Every sentence that you read either triggers something and says, oh, okay, I already know this, or this contradicts something I know, or this makes me think about something, or I should look up some further details.
So we actually digest the information that we read sentence by sentence, paragraph by paragraph. And that updates the model that we have of the world. And that is what real intelligence is about, what we call cognitive AI. Whereas what everybody's doing today is generative statistical AI, which just blindly ingests all of the information without trying to make sense of it. So that's a big difference. So AGI, or cognitive AI, is really about building thinking machines, machines that think conceptually.
[00:06:46] Unknown:
That aspect of updating the model continuously, having some linkages between concepts, brings up a lot of other parallel fields to AI, and also some of the statistical approaches to machine learning. In particular, it brings to mind Bayesian statistics, which also brings up the idea of causal modeling. And then also from that concept linkage perspective, it brings up the idea of knowledge graphs, which are also being used to supplement things like generative AIs or other machine learning models. And I'm wondering if you can give me some idea of some of the overlaps and some of the ways that those concepts are disjoint from cognitive AI, the way that you think about it.
[00:07:29] Unknown:
Right. So I think the most fundamental problem with current generative systems is that, in fact, the acronym GPT already gives us a very good clue. G is generative. So it makes up stuff basically from, you know, all of the mass of knowledge that it's acquired. But the knowledge it's acquired is good, bad, and ugly. It hasn't been integrated, sifted, hasn't been validated. It's just in there. So you basically generate stuff, you know, next token, essentially, from there. And, of course, that's amazingly powerful, but that's what it does. Now the P is pretrained, which inherently tells you it's not really gonna learn anything when you use it.
So, you know, it's essentially a read only system. Now, of course, there are different tweaks that people try to use to overcome it, but the bottom line is the model itself does not change. And the T is transformer based, which basically means it's a back propagation type system, which has to be batch trained. So as long as you're using transformers, you are locked into that approach. Now, cognitive AI, the way we approach it, is we do have a dynamic knowledge graph that gets updated in real time. It can't be a separate external database. It has to be the actual model itself, this dynamic vector graph database, in our case, so that, as in the example I gave, as it ingests a sentence or paragraph, it will update the model immediately, and it will, you know, pick up contradictions or gaps in knowledge that need to be filled, and it can think and reason about that. So it is really very, very different.
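To make that real-time integration idea concrete, here is a minimal sketch in Python of how a knowledge store might ingest one statement at a time and immediately flag contradictions or confirmations. It only illustrates the principle described above, not aigo.ai's actual representation; the KnowledgeGraph class and its (subject, relation, value) format are invented for this example.

```python
# A minimal sketch of the incremental-update idea, not aigo.ai's actual
# implementation: a tiny in-memory knowledge store that ingests one
# statement at a time and reacts immediately, with no batch retraining.
from dataclasses import dataclass, field


@dataclass
class KnowledgeGraph:
    # facts maps (subject, relation) -> value, e.g. ("sky", "color") -> "blue"
    facts: dict = field(default_factory=dict)

    def ingest(self, subject: str, relation: str, value: str) -> str:
        """Integrate a single statement and report what changed."""
        key = (subject, relation)
        if key not in self.facts:
            self.facts[key] = value
            return f"learned: {subject} {relation} {value}"
        if self.facts[key] == value:
            return f"already known: {subject} {relation} {value}"
        # The new statement conflicts with what the model already believes,
        # so surface it for reasoning instead of silently overwriting.
        return (f"contradiction: {subject} {relation} is recorded as "
                f"{self.facts[key]!r} but new input says {value!r}")


kg = KnowledgeGraph()
print(kg.ingest("Deep Blue", "plays", "chess"))   # learned
print(kg.ingest("Deep Blue", "plays", "chess"))   # already known
print(kg.ingest("Deep Blue", "plays", "go"))      # contradiction flagged
```

The point of the toy example is only that every input updates the model at the moment it is read, which is the property a pretrained, batch-trained system does not have.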
[00:09:24] Unknown:
That idea of updating the conceptual model of the AI system and ensuring that the data that it's fed has some impact, it also brings up the question of skepticism or being aware of the potential for fallacies or misleading information, which we as humans have problems with. It also brings up the idea of garbage in, garbage out for AI systems, and I'm curious how you think about the validation and quality gating of information that you feed into these models to ensure that they don't build some updated view of the world that is based entirely on propaganda or false concepts?
[00:10:07] Unknown:
Right. Yes. It's a good question. And, I mean, humans are actually quite capable, at their best, of, you know, discerning what is true information or not. It's really our reptile brain, our emotions, that kind of get in the way. So an AGI really won't have that handicap of being emotional about something where the ego gets in the way, or just being part of a tribe that you have to agree with. It's inherently not going to have those kinds of emotional barriers. And it'll also be aware of the kinds of fallacies that a cognitive system inherently can fall into. So it'll more easily be able to compensate and adjust.
Now what we're also talking about, of course, is that an AGI will have system 2 thinking, you know, in terms of Daniel Kahneman's model of system 1 and system 2, you know, subconscious versus a supervisory metacognitive function that basically monitors your thought process itself. And that's a really important aspect of intelligence, that you can think about your thinking. You know, you can monitor your thinking and say, am I getting carried away? Does this make sense? You know, should I double check it in some way? So I think an AGI inherently has a lot of advantages over humans to avoid a lot of these mistakes. And also, it can plow through so much more information and double check things, whereas, you know, we wouldn't have the time, we wouldn't have the patience to do that. So I think, you know, we're in a much better place to have a robust system. And AGIs can, of course, also check each other, which is also important because they may start off with different sources. They may have gone down a different path of understanding some particular problem. So a good example here is, you know, if you train up an AGI as a cancer researcher, for example, it'll have a particular view of the problem.
And now you can make a million copies of it, and each one goes off and pursues different aspects of that. And, you know, they could come back and then compare notes, so to speak, and see which of the particular avenues are more promising than others.
[00:12:31] Unknown:
Trying to build some more contextual understanding for people who are coming from other areas of the machine learning AI ecosystem about how to think about the technical aspects of building a cognitive AI. Things that come to mind again are reinforcement learning, agent based modeling. There's been a lot of talk about the concept of maybe we'll reach, you know, AGI just by throwing more and more data at the problem, which obviously is not going to be the whole solution. I'm wondering if you can maybe draw some parallels between other avenues of ML research and model development and how to bring that more towards the method of cognitive AI that you're discussing?
[00:13:15] Unknown:
So I think it's really fundamentally the wrong path to AGI, so there isn't much to be salvaged. You know, again, the GPT approach, the transformer approach, the big data approach. And DARPA actually gave a presentation a few years ago where they talk about the three waves of AI. The first wave of AI was what's now called good old fashioned AI, you know, really logic based approaches and so on. We are now in the second wave of AI, which is basically statistical, you know, neural network type approaches. And the third one is the cognitive, adaptive wave, which we really haven't quite reached yet. I mean, our company has been working on it for, you know, more than 15 years, but we're a small company. There are not a lot of players in the cognitive AI field right now. And let me give a few examples of why it's fundamentally different.
So one of them is your starting point really has to be understanding intelligence, human intelligence. That has to be the starting point, to understand cognition and ask what cognition requires, rather than starting from, hey, we have a lot of data, we have a lot of computing power, you know, what can we do with it? That's the hammer we've got, so, you know, everything starts to look like a nail. And that's really the era we're in. And, of course, enormous strides have been made. A lot of money is being made, and a lot of money is being thrown at it. So it's natural that people would follow that. In fact, we have a whole generation of AI scientists now who don't know anything other than big data statistical approaches. So, you know, that's where the momentum is right now. But if your starting point is, what does cognition require?
What are the core requirements of intelligence? You come to a very different conclusion. And the conclusion is, well, a couple of things. It has to be real time conceptual integration. You can't get away from that. So it can't be pretrained. The other thing that you understand is that intelligence is not about having knowledge. So it doesn't matter how much knowledge you have or how many skills you have trained into your model. That's not the core of intelligence. The core of intelligence is being able to acquire knowledge, to be able to learn. So all the benchmarks and everything are really misaligned with achieving AGI because they measure how much knowledge the system has.
That's how the benchmarks are. So if you want to get published, if you want to get funded, it's all about, can you build bigger models, models that have more knowledge? But that's barking up the wrong tree. It's really about understanding how to build a system that can learn. So it'll have very little knowledge initially, but it'll be very powerful in being able to acquire new knowledge conceptually and validate it as it goes along. So there has to be that shift in understanding that it's small data, real time learning that we have to solve.
And, you know, that's a different architecture.
[00:16:38] Unknown:
And so digging into that technical aspect and the architectural concepts of building a cognitive AI, and building it in such a way that it is focused on that learning activity and being able to extract concepts from the information that it's fed, I'm wondering if there are any particular modalities of data that are easiest to work with in that approach. Whereas humans are very multimodal, multisensory, most machine learning models, I mean, we're building more into multimodal capabilities, but most of them are focused on a particular sensory domain, whether it's vision or text or language or audio. And I'm wondering, for purposes of building these cognitive models, how should we be thinking about the architectural approaches, the hardware capabilities, the data preparation that's necessary to be able to feed information into these systems in such a way that they're able to understand what are the actual conceptual elements and incorporate them into their understanding of the world?
[00:17:41] Unknown:
Yes. So I've recently published two white papers. One is, why don't we have AGI yet? And that gives some of the background that I've been talking about. And the other one gives a sense of what our approach is and what I believe the right approach is. So you need to have a knowledge representation which can be updated. You need to have learning mechanisms which can be updated in real time. You need to have long term memory, short term memory, context, you know, that you can use for understanding things, reasoning, and all of it has to be integrated in a highly efficient system.
The system that we've built, for example, our knowledge representation, our knowledge graph, is literally a thousand times faster than any graph database that's available, because we've specifically designed it for this purpose. Now, in terms of sense input and data preparation, the model that I favor is something I call the Helen Hawking model of AGI. What I mean by that is think about Helen Keller, who had very limited sense acuity, and Stephen Hawking, who had very little dexterity, and both were extremely smart people. So you can have a lot of intelligence, you can have learning ability, you can have intelligence with limited sense acuity and limited dexterity.
Now I'd love to have a robot that can explore the real world and learn from that, but robotics is just really, really hard. It's really expensive. You have all sorts of problems running simulations and so on. So, basically, how can we build an intelligent system where we don't have to deal with, you know, the super high resolution vision that humans have and the dexterity that we have? The approach that we're taking is basically saying your sense input is a computer desktop. You know?
So you can basically have potentially very complex vision because you can be looking at a video. You can have a camera to the outside world. But you can also start with something much simpler for learning. So you can kind of gradually crank up the resolution of vision that you have. And then the dexterity is basically a mouse and keyboard. And, you know, between those, you can interact with the world. So that is the approach we're taking. It's not the only approach we're taking, but, to us, it gives a balance between having a rich enough system that has some grounding and yet is not that complex and overwhelming in terms of the amount of processing power that you require.
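As a rough illustration of this desktop-as-senses framing, the sketch below models the agent's perception as a configurable-resolution screen capture and its dexterity as mouse and keyboard events. The DesktopSenses and DesktopHands classes are hypothetical stand-ins invented for this example, not part of any system discussed in the episode.

```python
# A hypothetical sketch of "desktop as senses, mouse and keyboard as
# dexterity": a narrow perception channel and a narrow action channel,
# both of which can be enriched gradually as the system learns.
from dataclasses import dataclass
from typing import Tuple


@dataclass
class DesktopSenses:
    resolution: Tuple[int, int] = (64, 64)  # start coarse, crank up over time

    def observe(self) -> list:
        # Placeholder: a real system would grab and downsample the screen;
        # here it just returns a blank frame of the chosen resolution.
        w, h = self.resolution
        return [[0] * w for _ in range(h)]


class DesktopHands:
    def click(self, x: int, y: int) -> None:
        print(f"click at ({x}, {y})")   # stand-in for a real mouse event

    def type_text(self, text: str) -> None:
        print(f"type: {text}")          # stand-in for real keystrokes


# The agent's whole world is these two objects: low-bandwidth senses and
# low-dexterity actions, which is the trade-off described above.
senses, hands = DesktopSenses(), DesktopHands()
frame = senses.observe()
hands.type_text("hello")
```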
[00:20:41] Unknown:
In terms of the application of these cognitive systems, as we've been seeing with the hype around these large language models and all of the supposed capability and applications that they have, there's, you know, the inevitable hype cycle, but then there's also the question of building and maintaining trust and that uncanny valley moment where we think the AIs can do more than they can, and then we're very disappointed when they can't, or we apply them in settings where they have no right operating and it produces a situation of risk or potential harm.
And I'm curious, what are some of the ways that you think about those elements as we build cognitive AIs and get closer to that epoch of AGI, ensuring that we aren't jumping the gun, so to speak, and putting these models into environments and situations where we either run the risk of losing trust and putting us into another AI winter or causing harm because of the fact that the models aren't being properly tested and validated and monitored?
[00:21:49] Unknown:
Yes. So I think we're already seeing, in large language models, you know, some of the blowback. You know, I don't know if you saw the Chevrolet dealer where their chatbot actually sold a car for $1, and then the user said, well, are you sure this is a legal agreement? And the chatbot said, yes, this is a legal agreement. You can have the car for a dollar, you know, and some other disasters like that. So clearly, you know, some companies are implementing LLMs in totally the wrong applications.
So LLMs, I mean, the technology is absolutely amazing. I think we're all blown away by, you know, what it can do in terms of making suggestions and summaries and writing poems, and, you know, just the vast amount of knowledge that is there and how it can generate this, you know, fantastic text and summaries from it. I mean, it is really quite phenomenal what these LLMs can do. But the simple rule here is there always needs to be a human in the loop. You cannot trust these systems. You know, they sound extremely confident in what they give you back.
They don't know what they're saying. They don't know how confident they can be about things. And that is basically where the danger lies. So you cannot run them autonomously. And from a practical point of view, my company is really two divisions. One is the commercial division, where we automate call center operations. And the other is our development division, where we continue developing our technology to get closer and closer to human level. And, you know, we talk to a lot of companies that want to automate their call center. They want to replace call center agents because it's just hard to find them, hard to train them, they're expensive, and, you know, the quality is variable.
So for all these various reasons, they want to automate service calls. And we can do that extremely well with our cognitive AI, where everything is predictable. The legal department can sign off on it. The marketing department can sign off on it. And, you know, so can customer experience. So we get very, very good results. Companies who've tried to do that with LLMs basically just have not succeeded at all. Because if you even have, you know, a 3% error rate or 5% error rate or something like that, by the time you have a conversation that has, you know, 10 steps or so, I mean, you're guaranteed to get into trouble. And, you know, if your system is hooked up to APIs where it can, you know, give the customer a credit or cancel an order or change the delivery address or whatever it might be, it's just a disaster. So, really, the correct applications for LLMs are as a tool, where a human is always in charge and in control. And it can be extremely powerful for that, you know, where you can ingest all of a company's documentation, and you can do a query, and it can give you some information. But the human has to say, well, is this actually relevant for what I'm talking about? Is it appropriate? Does it make sense?
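The compounding-error point can be checked with a quick back-of-the-envelope calculation (an illustration, not a figure quoted in the episode): if each conversational step fails independently at the stated error rate, a 10-step conversation goes wrong surprisingly often.

```python
# Rough check of how per-step error rates compound over a 10-step conversation.
for per_step_error in (0.03, 0.05):
    p_clean = (1 - per_step_error) ** 10  # probability every step is correct
    print(f"{per_step_error:.0%} error/step -> "
          f"{1 - p_clean:.0%} chance of at least one mistake")
# 3% error/step -> ~26% chance of at least one mistake
# 5% error/step -> ~40% chance of at least one mistake
```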
[00:25:16] Unknown:
One of the challenges there is that the overarching promise and goal of AI is that it helps with automation, and we want to be able to fully automate things that require humans. Obviously, reducing the amount of time and energy required by humans to achieve a particular task is beneficial, but the pursuit of profit is always going to try and move further and further towards full automation and maximizing capabilities. And I'm curious, what are some of the either regulatory or ethical or business safeguards that we need to be considering as we focus more on bringing AI to the level of capability and capacity of humans or, like, beyond the capacity of humans?
[00:26:05] Unknown:
I think the normal sort of business rules, norms, and laws really apply. I think, you know, a company is liable for the product that it produces. But there are such enormous benefits to be gained by having AGI at human level. A lot of the concerns, a lot of the problems we have right now with AI, is really that it isn't smart enough. So it's a lack of intelligence that's causing a lot of problems rather than it being too smart. And the reason I'm pursuing AGI so vigorously is I really believe that it will enhance human flourishing tremendously.
And I see that in a number of areas. I could maybe put it into three different buckets. The one I already mentioned in passing is research. You know, again, imagine having an AI cancer researcher trained up, and you can now make a million copies of that. We are going to make so much more progress, rapid progress, on cancer. Now you apply that to all sorts of different areas where research can really enhance human life: all other diseases, pollution, energy, food production, nanotechnology, whatever.
There are so many areas that will benefit from having more scientists that don't have egos getting in the way, that can work 24/7, that are much more rational than we are as humans. For humans, rationality is an evolutionary afterthought. We're not really that good at it. For AIs, it'll be their natural way of cognition. So AI scientists helping us solve a lot of the problems that face humanity, I think, is just very exciting to me. The second bucket would be just very dramatically reducing the cost of goods and services, which basically just creates wealth. More people will be able to afford things.
And the third area, the one I find actually the most exciting, is what I call a personal, personal, personal assistant. The reason I have three personals there is there are three different meanings of the word personal that are relevant here. The first one is it's your personal assistant. You own it. It serves your agenda, not some mega corporation's agenda. The second personal is it's hyper personalized to you. It gets to know your history, your dreams, your goals, your preferences, who you do business with, and so on. And the third personal is the one of privacy: you decide what it shares with whom.
And I think if each of us had this personal, personal assistant, it's like an exocortex, really. It's an extension of our own cognition that can also be like an angel on our shoulder, that can kind of give us good advice, help us avoid some of the mistakes that we tend to make when our emotions run away or we don't think carefully enough about things. So I think AGI will be tremendously helpful for human flourishing.
[00:29:38] Unknown:
And in terms of the technical capacity, the available body of research, and the general availability of information about how to build these cognitive AI systems, I'm wondering if you can give your sense of the state of the ecosystem for that and some of the work that can and should be done to improve the general availability and awareness of this approach towards AI systems.
[00:30:08] Unknown:
So at the moment, all the momentum is clearly in large language models. You know, that's where billions and billions of dollars are flowing. And, you know, there are tremendous tools, and there's a tremendous ecosystem for that. And that's not going away anytime soon because it just has so much momentum. And, unfortunately, the way VCs operate is they tend to follow momentum. You know, investing generally, since the dotcom boom I think, has turned from value investing into momentum investing. You know, the money goes where the action is.
So we need to really see that change, with cognitive AI being recognized and accepted and money flowing into that. At the moment, there really is very little in terms of commonly available infrastructure. We're one of the very few companies working on it. Intel has a fairly sizable internal project for cognitive AI, and there are a few others. But there's really very, very little effort. And, you know, we are trying to actually scale up our system, our team, now. That's why we're looking for investment to accelerate the development that we're doing on the AGI side.
But there is really very little infrastructure available right now, unfortunately.
[00:31:42] Unknown:
Given that lack of investment, lack of infrastructure for cognitive AI approaches, what are maybe some of the ways that the existing ecosystem of tooling can be repurposed and bent towards the construction and growth of cognitive AI?
[00:32:00] Unknown:
So I think there's a great open source community that could be leveraged. And, you know, we're exploring that as a possibility to get people excited. But we need to have enough of a core infrastructure, and we really need to have enough resources to be able to support an open source community as well. So that's one area. Of course, the fact that hardware continues to be pushed forward and accelerated is very useful. In fact, what LLMs have shown, or what they've demonstrated, I think, is that we're probably not that far away from having enough hardware. I mean, when you look at the inference cycle, you can now run that on a smartphone.
You know, barely, but you can. So we're actually very close. If the inference for a cognitive system isn't much more expensive than inference for a statistical system, which I actually believe will be the case, based on our experience and so on, it basically means you can run an AGI, a cognitive AI, on, you know, a phone or a small computer. We're also seeing that, in terms of training a system, it requires a tiny, tiny fraction of the hardware. I mean, we train our systems on, you know, 5 year old laptops, basically. I mean, not always, but we can.
Now, of course, our models are still relatively small. But even with that, we know we're nowhere near hitting hardware limitations. So I think the biggest benefit of LLMs to building AGI, cognitive AI, is the vast amount of information that is embedded in the LLMs, if you can extract it reliably, and I think there are ways of doing that by asking the LLM in the right kind of way. I mean, a lot of the disasters we're seeing with LLMs, a lot of the really egregious errors, are, one, people trying to break them, and, you know, that's where we see a lot of the examples, or people not being super careful about it. So I think if you set up your cognitive AI training system carefully, it can actually extract a lot of valuable information. You can cross check that as well, you know, with other sources, with other LLMs, and you can ask the same question in different ways and see if it gives you the same answer. So I think that's extremely valuable.
Because one of the concerns over the years for building AGI, from our perspective, has always been, how do we teach the system all of this common sense knowledge that it needs to have? And with LLMs, I think that has become a lot more feasible.
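One way to picture the cross-checking described above is to ask an LLM the same question in several phrasings and only keep answers the model gives consistently. The sketch below is a hypothetical outline of that idea, not the technique used at aigo.ai; ask_llm is a placeholder for whatever LLM client you would actually use, and the agreement threshold is arbitrary.

```python
# Sketch: extract a candidate fact from an LLM only if it answers the same
# question consistently across several different phrasings.
from collections import Counter


def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real call to your LLM client of choice.
    raise NotImplementedError("plug in your own LLM client here")


def extract_fact(question_variants: list[str], min_agreement: float = 0.8):
    """Ask the same question in several phrasings; keep the answer only if
    a large enough majority of responses agree, otherwise discard it."""
    answers = [ask_llm(q).strip().lower() for q in question_variants]
    answer, count = Counter(answers).most_common(1)[0]
    if count / len(answers) >= min_agreement:
        return answer   # candidate fact, still worth tagging with its source
    return None         # inconsistent answers: don't store anything


# Usage: three phrasings of the same underlying question.
variants = [
    "What is the boiling point of water at sea level in Celsius?",
    "At sea level, water boils at how many degrees Celsius?",
    "In degrees Celsius, what temperature does water boil at (sea level)?",
]
# fact = extract_fact(variants)
```

The same pattern extends to cross-checking against other LLMs or external sources, as mentioned above: anything the sources disagree on simply never makes it into the knowledge store.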
[00:35:01] Unknown:
And then another aspect of the whole space of AI applications, the regulatory burdens around it, and the development and maintenance of trust, particularly as we start thinking about bringing AI into more sensitive areas such as medicine or transportation, is the requirement around explainability and understandability, and, in the creative arena, being able to appropriately provide attribution. I'm wondering how the cognitive approach to AI modeling and development either helps or complicates that aspect of the problem.
[00:35:43] Unknown:
So, inherently, our design is not a black box at all. So, you know, everything is scrutable. Now, having said that, of course, some of the knowledge was acquired in very complex ways, and, you know, a lot of thought went into it. So it might take quite a bit of analysis to pin down exactly why the AI came to a certain conclusion. But you can also ask it, because it has metacognition, you know, like you can ask a human where they got it from. But in addition to that, because it's learning incrementally, you can literally tag every piece of information that it acquires with where it got the information from, what the source was, you know, when it got the information, who it got it from. And then, of course, the reliability of the source can be tagged as well. So it's a huge advantage that we have with our cognitive AI approach.
You really have that transparency. You can audit the system, and you can tag knowledge sources. So it really overcomes all of the limitations of LLMs. One of the things you asked me earlier that I think I didn't quite get to is, how do we train the system? You know, I mentioned that we can extract information from an LLM. But for the core knowledge that the system needs to acquire, it needs a careful curriculum to do this effectively. And, in fact, a large part of our team is what we call AI psychologists. Their skill is basically linguistics, education, cognitive psychology, and so on. So it's finding the most effective way to quickly teach the system what it needs to know in a robust way. That's a big part of the technology of achieving cognitive AI: building this effective curriculum for training the system.
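A minimal illustration of that provenance tagging might look like the following, where every stored fact carries its source, acquisition time, and a reliability estimate so the system can later answer "where did you get that from?". This is a toy schema invented for this example, not aigo.ai's actual representation.

```python
# Sketch: every acquired fact is tagged with provenance metadata so the
# knowledge base can be audited after the fact.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class TaggedFact:
    statement: str
    source: str            # e.g. a document title, URL, or person
    learned_at: datetime   # when the system acquired it
    reliability: float     # 0.0 (untrusted) to 1.0 (fully trusted)


store: list[TaggedFact] = []


def learn(statement: str, source: str, reliability: float) -> None:
    store.append(TaggedFact(statement, source,
                            datetime.now(timezone.utc), reliability))


def audit(statement: str) -> list[TaggedFact]:
    """Answer 'where did you get that from?' for a given statement."""
    return [f for f in store if f.statement == statement]


learn("Aspirin inhibits COX enzymes", source="pharmacology textbook",
      reliability=0.9)
print(audit("Aspirin inhibits COX enzymes"))
```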
[00:37:52] Unknown:
And another element of the problem, as you're talking about being able to tag all of this information as it's being ingested and incorporated into the model, it brings up the question of storage capacity and retrieval methods. With large language models, they're effectively a compression algorithm on top of information, and then you have things like retrieval augmented generation, where you have additional context corpora that you can pass to the LLM during the question response cycle. For cognitive AIs, because you're feeding it in incrementally and the knowledge is being incorporated into the model in real time, I'm curious how you need to think about the storage and retrieval architecture of the model, and how much of that knowledge lies resident in the, you know, the incrementally built binary or whatever the actual artifact is, versus how much of it is a storage and retrieval system with the model knowing how and when to retrieve different pieces of information?
[00:38:53] Unknown:
Yeah. Very good question. Now, first of all, we see the system, again, as a Helen Hawking type of system. So it will not inherently have a lot of visual information in its model or a lot of dexterity encoded. That massively reduces the amount of memory you need. Because when it comes down to natural language, you know, you can have several lifetimes' worth of natural language interaction, all the books you've read and so on, and they can easily be kept in today's memory. So our model is completely kept in memory. It's not trying to compete with LLMs that have all the world's knowledge in one model.
You don't really need that, because the system will also be an excellent tool user, which is, of course, one of the unique features of human level intelligence: we can easily get to information. And an AGI, of course, will have it much easier because it can literally think about something and look it up in a database somewhere or query an LLM or, you know, whatever. So the way I see the cognitive AI is it will, you know, have sort of the knowledge, or maybe a few times the knowledge, that a human would have, but not the knowledge of all humanity, of all mankind. It doesn't need to do that. And they will tend to specialize in certain areas, whether it's your personal personal assistant or a cancer researcher or whatever. But they'll have the general ability to, of course, plug into the Internet and, you know, instantaneously call up the information that they need.
So I don't see memory limitation as being a problem. You know, with ten gigabytes or tens of gigabytes of RAM, there's an enormous amount of information you can store in there.
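A rough back-of-the-envelope estimate (my own numbers, not figures from the episode) suggests why a lifetime of language fits comfortably in memory:

```python
# Rough estimate of how much raw text one lifetime of speech amounts to.
words_per_day = 16_000    # a commonly cited rough figure for daily speech
years = 80
bytes_per_word = 6        # ~5 characters plus a space
lifetime_bytes = words_per_day * 365 * years * bytes_per_word
print(f"~{lifetime_bytes / 1e9:.1f} GB of raw text for one lifetime of speech")
# ~2.8 GB, so even with indexes and graph structure layered on top, tens of
# gigabytes of RAM leaves plenty of headroom.
```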
[00:40:54] Unknown:
You preempted my next question by saying that the cognitive AI could be interacting with the LLMs, which was going to be one of my questions. As you were saying, being able to retrieve and interact with external systems to retrieve and process and, I guess, evaluate the quality of information, I think that's also an interesting application as well: being able to put these in the path of these large language models, where you have, effectively, the world's knowledge compressed into these systems, and then being able to have something do that processing of information before feeding it back to the human, so that the human doesn't have to waste their cycles doing that evaluation of, is this thing that it said back to me actually accurate or true, or is it just making things up and hallucinating?
[00:41:38] Unknown:
Yes. Well, in fact, right now, of course, our reference point is current LLMs. But once you get to AGI, or close to AGI, what I see happening is really that it will extract information from LLMs and store it in a validated, structured knowledge graph, you know, or multiple knowledge graphs, so that you're essentially building something more like Wikipedia. Okay, Wikipedia has, you know, some political problems, but you basically have a reliable knowledge graph, an external knowledge graph, that draws information from LLMs but that has already been validated and tagged. Because once you ask a large language model about a piece of information, you can then also follow up and say, well, where did it probably get this information from? And you can do a Google search, or you can do it other ways. So you can essentially annotate and clean up the information from LLMs.
And, you know, then you'll have a much more reliable source, a reference source, after that.
[00:42:46] Unknown:
In terms of your experience working in this space, helping to drive forward this area of cognitive AI and that approach towards AGI, what are some of the most interesting or innovative or unexpected ways that you've seen these cognitive models used?
[00:43:02] Unknown:
Well, you know, it really blew me away, as it did most people, how it can generate, you know, valid text, basically, you know, prose. And that really is quite amazing. And, you know, we use it for brainstorming, to summarize things, to get ideas, and it's really, really powerful. And I think that was quite unexpected, that just by scaling up, you know, the early GPT models, you could get this quality of natural language being generated and the amount of information there is. So, I mean, that still amazes me, you know, when I use these tools.
[00:43:46] Unknown:
And in your work of building cognitive models, what are the most interesting or unexpected or challenging lessons that you've learned personally, and maybe some of the key takeaways that you'd like to share with the listeners?
[00:44:00] Unknown:
So we are actually, you know, very pleased with how we've been able to commercialize early versions of our AGI, our proto AGI. You know, I don't want to call our current technology AGI, but our cognitive AI approach. One of our big customers is the 1-800-Flowers group of companies, Harry and David and so on. And we're using this very, very effectively. You know, the parts we've implemented handle some very complex interactions with customers and products, where we get just under 90% self-service, where the system gets to know whom the customer buys gifts for, what the relationship is, what kind of gifts they buy for them. And it has all the business rules that it needs to deal with, and it interfaces with APIs, back end systems, and so on. And it can do that very effectively in an auditable, reliable way.
So that's really fantastic. The limitation right now is that there's still a lot of human labor involved in teaching the system, in setting up the ontologies and so on, which, as we develop AGI capabilities further, the system will be able to learn by itself to a much bigger extent. Now, one of the applications we'd love to implement, and we're talking to a number of universities, so hopefully we'll do something, would be a student assistant, you know, a cognitive AI that is your personal personal assistant as a student, especially when you first get to university and, you know, everything overwhelms you, and it can help you find your way around, help you with your studies and, you know, where to get books and meals and friendships and study groups and whatnot. So those are the kinds of applications we really like as a stepping stone towards, you know, getting to our goal of true AGI.
[00:46:08] Unknown:
And for people who are working in the space of machine learning and AI, and they're trying to build applications, build systems, what are the cases where a cognitive approach is the wrong choice or overkill?
[00:46:22] Unknown:
Well, for the kind of use cases we have, I mean, first of all, cognitive AI is still in its infancy, you know? So you don't have a lot of choices available to you. LLMs are really good at taking a lot of information and being able to give it back to you in a sort of semi reliable way. You know? So if a company has a lot of documents that you can basically fine tune your system with, or train your system with, or have an external vector database for, it can be very effective in helping you. And I think it's excellent for idea generation, for summaries.
So there are, you know, tons and tons of applications where LLMs are really useful. Now, as we were talking about, once you get full AGI, LLMs are really gonna be redundant, you know, eventually, because you will have more reliable data sources that will have been created. But, you know, we're not quite there yet.
[00:47:37] Unknown:
And as you continue to build and iterate on the work that you're doing at AIGO and in your research efforts, what are some of the things you have planned for the near to medium term or any particular projects or problem areas that you're excited to explore further?
[00:47:52] Unknown:
So on the commercial side, the most obvious area that we're already successful at is replacing call center agents. A lot of people talk about augmenting, and, you know, LLMs can certainly help in that, where, again, they can expose FAQs and things like that and help call center agents. But ultimately, that only buys you so much. Again, people talk about it's going to replace programmers or so. No, it's not. No, it's not. I mean, I program every day. I use these tools. Too many times they give you the wrong answer, and you end up spending more time debugging the code that it gave you than it would have taken you to write it in the first place. So, you know, it's a productivity tool, but it's not gonna replace programmers. So replacing call center agents is a very obvious thing for us to continue expanding, and on the commercial side that's really our most obvious one. And there are, you know, tens of millions of people across the world doing those kinds of jobs. And most people don't particularly like doing it, either.
It's a shocking statistic that, I believe, the average longevity of a call center agent is 6 weeks. Now, obviously, that average includes a lot of people who quit after the first day or the first week and say, I can't handle this, you know? People screaming at me all day long, or sitting in a cubicle. So, you know, that's an obvious area that will continue. Now, on the development side, you know, we are really just looking for funding. We have a very detailed road map of what we need to do to get to AGI, but we can't do it with, you know, the three dozen people that we have.
You know, I don't think we need thousands of people working on it, but we do need, you know, at least 50 people or so on the team; we have very specific development that has to happen to crank up the IQ of the system. And the challenge in convincing people that cognitive AI is the way to go is that almost everybody in the field of AI these days has a background in statistics, mathematics, logic. You know, those are the skills that are required for generative AI. And they can't really even begin to relate to a cognitive approach. When you talk about mental processes, thinking and, you know, reasoning and concept formation and so on, you just get blank stares. You know? And so it's very difficult to even have a conversation about why cognitive AI or how cognitive AI can work. That is a real challenge.
[00:50:51] Unknown:
Are there any other aspects of the work that you're doing at AIGO or the overall space of cognitive AI and its juxtaposition to the statistical approaches that we've been using for ML and AI systems that we didn't discuss yet that you'd like to cover before we close out the show?
[00:51:08] Unknown:
No. I think, you know, really just to recap: LLMs have a lot of use cases. And, obviously, there is a lot of money, there's a lot of momentum. And a lot of people will make a lot of money with it, and more people will lose money, because, obviously, most of the startups will fail, because there really isn't much of a moat. The moat is, you know, do you have more money to train bigger systems? Apart from that, you know, specialized knowledge in a particular field might be a moat, but that's not scalable, not highly scalable.
So, you know, that's where we are. But the three big problems that aren't gonna go away, that are inherent in them, are their lack of reliability, their lack of being able to learn in real time and update their model, and the cost. They are actually very expensive. You know, when you start trying to implement them, they're very expensive to train, and they're also not cheap to run. So those are the inherent limitations that, you know, are there. And with cognitive AI, the biggest problem is that we don't have enough people working on it. There's not enough money flowing into it. When people ask me when we will get AGI, typically I answer, I don't measure it in years. I measure it in dollars.
It's really, you know, how soon will we have some real money flowing into the field so that we can, you know, actually develop it to human level.
[00:52:36] Unknown:
Alright. Well, for anybody who wants to get in touch with you and follow along with the work that you're doing, I'll have you add your preferred contact information to the show notes. And as the final question, I'd like to get your perspective on what you see as being the biggest barrier to adoption for machine learning today.
[00:52:51] Unknown:
Well, I think I just mentioned it, you know: it's reliability, the lack of real time training and learning, and the cost. So, you know, I think they're pretty obvious.
[00:53:06] Unknown:
Alright. Well, thank you very much for taking the time today to join me and share the work that you've been doing on developing and improving the cognitive modeling approach and your journey towards AGI. It's definitely a very interesting body of work that you're doing. I appreciate all the time and energy that you and your team are putting into that, and I hope you enjoy the rest of your day.
[00:53:28] Unknown:
Well, thank you, Tobias.
[00:53:36] Unknown:
Thank you for listening. Don't forget to check out our other shows, podcast.init, which covers the Python language, its community, and the innovative ways it is being used, and the Machine Learning Podcast, which helps you go from idea to production with machine learning. Visit the site at dataengineeringpodcast.com. Subscribe to the show, sign up for the mailing list, and read the show notes. And if you've learned something or tried out a project from the show, then tell us about it. Email hosts@dataengineeringpodcast.com with your story. And to help other people find the show, please leave a review on Apple Podcasts and tell your friends and coworkers.
Introduction to Peter Voss and AI Applications
Peter Voss's Journey into AI
Understanding Human-like AI
Challenges with Current AI Models
Validation and Quality Gating in AI
Technical Aspects of Cognitive AI
Building AGI: Approaches and Challenges
Applications and Risks of AI
Ethical and Regulatory Considerations
State of Cognitive AI Ecosystem
Explainability and Understandability in AI
Storage and Retrieval in Cognitive AI
Innovative Uses of Cognitive AI
Lessons Learned and Key Takeaways
Future Plans and Projects
Conclusion and Final Thoughts