Tim Curtis: [00:00:00] Hello, and welcome to this edition of the Digital Velocity Podcast. Joining us today is Peter Voss, a pioneer in AI who coined the term artificial general intelligence and is the CEO and Chief Scientist at Aigo.ai. For the past 15 years, Voss and his team at Aigo have been perfecting an industry-disruptive, highly intelligent, and hyper-personalized chatbot with a brain for large enterprise customers. To say that I'm excited about this conversation probably doesn't begin to do it justice. So, Peter, welcome to the show today.
Peter Voss: Yeah. Thanks for having me.
Tim Curtis: One [00:01:00] of the things that I would like to start out talking a little bit about is getting just a brief overview of your journey into the space and, you know, how you have ended up being such a pioneer in this field. I would just love to hear how it started out.
Peter Voss: Yes, absolutely. So, I actually started out as an electronics engineer. I started my own company building electronic equipment for, you know, industrial applications. Then I fell in love with software and my company turned into a software company. I developed a comprehensive ERP package for medium-sized companies. My company did pretty well. We went from the garage to 400 people and did an IPO. So, that was great, but when I exited that company, I was reflecting on what software was all about. I was very proud of the software we produced, but still, it didn't have any intelligence.
If the programmer didn't think of some particular scenario, it would just give an error message or crash. You know, there was no common sense, [00:02:00] no learning, and that's true for pretty much all software that's out there. So, that's really what I wanted to understand: how can we make software intelligent?
The starting point was that I needed to really understand what intelligence entails, what aspects of intelligence are important. So, I actually took off five years to study intelligence, all different aspects of intelligence, starting with epistemology, theory of knowledge. How do we know anything? What is real? How can we be certain of things? Really the very fundamentals of knowledge and understanding.
And then I studied cognitive psychology. How do children learn? How does our intelligence differ from animal intelligence? What do IQ tests measure, and are they meaningful? What are the different components of intelligence? And then, of course, what had already been done in the field of AI. So, putting that all together, I came up with a design for a thinking machine.
So, in [00:03:00] 2002, I got together with some other people who had similar ideas, and the three of us actually coined the term artificial general intelligence to really capture that we wanted to go back to the original dream of AI. When the term AI was coined some 60-odd years ago, it was about building a thinking machine that can think and learn and reason the way humans do. Now, that turned out to be really hard.
So, over the decades, what has happened is AI turned into narrow AI, where you take one particular problem and you solve it using computers, but there's an important problem with that approach. Take, for example, the breakthrough that IBM had in the nineties with Deep Blue becoming the world chess champion. You know, that is an example of narrow AI. Deep Blue beat the world chess champion, yet it couldn't even play checkers.
The problem with narrow [00:04:00] AI is it’s actually the intelligence of the programmer, or in today’s world, the data scientist, that is solving the problem using a computer. The intelligence is not actually in the computer so much as it is the programmer solving the problem, one problem at a time.
So, the idea of AGI, artificial general intelligence, is to go back to the original meaning of AI. How can we build a thinking machine, a machine that by itself can learn how to play chess and to reason about it, but can also equally easily learn checkers or any number of things the way we can learn lots of different things and solve different problems?
So, that’s really what AGI is about, going back to the original intent, to the original goal of AI. So, since 2002, I’ve been working with various teams alternating between R&D and commercializing our technology [00:05:00] to build this ever more powerful brain to get closer and closer to human-level capabilities. That’s my journey. It’s really about solving intelligence, building truly intelligent systems.
Erik Martinez: What made you focus on chatbot technology?
Peter Voss: Yeah. So, that's an interesting question. When we did our original R&D, starting in 2001 actually, we initially worked with a virtual critter in a virtual world. We had a virtual mouse in a virtual world that would go around and learn things about the environment, and it had virtual whiskers and eyes and things. You know, pretty low resolution and primitive. So, our original idea was, in fact, to have something more of an embodied or virtually embodied entity that would learn to navigate, learn how to deal with things in the world. We then upgraded that to a virtual dog that you could give commands, and then we said, well, why don't we work with an infant? You know, like a virtual [00:06:00] baby.
Anyway, we came to the conclusion that we could make more rapid progress by focusing on language understanding. There are a number of disadvantages in taking that route to AGI, but there are also a lot of advantages. One big advantage of focusing on language is that there are immediately some very powerful commercial applications. If you can have an intelligent conversation, or at least a conversation more intelligent than dumb chatbots, obviously you have a good commercial proposition.
So, we decided to focus on natural language conversation, conversational AI. Not just one-shot things like Siri and Alexa do, where you say, tell me the weather in Boston, but to really have an ongoing conversation that has deep understanding, short-term memory, long-term memory, the ability to reason. That's a very different kind of problem to solve, to have an ongoing, meaningful conversation rather than just the one-shot interactions that [00:07:00] Siri and Alexa do. That's really why we focused on language, but ultimately our project would need to go back to encompassing vision and dexterity and so on.
Erik Martinez: We have a partner on one side of our business that does some AI chatbot work. Over the years of talking to them, I've learned that they put a fair amount of human intelligence behind it in terms of providing the context for those conversations. How does your technology work? Does it work on a similar principle? What are you guys doing a little bit differently?
Peter Voss: Right. The way we usually explain it in just a few words is: ours is a chatbot with a brain. All the other chatbots we are aware of don't have a brain. The conventional approach to chatbots has two components. The first component is an intent classifier. So, you say, blah, blah, blah, weather, and it basically triggers the intent of weather report.
But you can also say, I hate Uber. Don't ever give me Uber [00:08:00] again, and it'll probably trigger the Uber intent. Then you go through the second part, which is some little flowcharty type program where it'll say, okay, where do you want to go? How many people are going? Do you want Uber X? That's pretty much the basis of all current chatbots other than ours: the intent classifier and the flow chart type program that executes once you've recognized the intent.
Now, of course, in the more sophisticated applications, there might be a lot of additional work that programmers do. For example, if you identify the customer, you might do an API call to the backend system to get more information about them, you know, or their location, so that you can give them the weather for where they currently are. There might even be some short-term memory that you specifically code in. The chatbot might ask, what's your name? You give the name and then the system will store that piece of information, and then, later on, it can [00:09:00] give you back your name.
Through a lot of programming effort, you can get pretty sophisticated conversations, but it's all human intelligence specifically solving those problems. Trying to anticipate, in the flow chart, what people might say, what pieces of information I need to store, and then where I can maybe use that information. So, it's all a very manual process with external intelligence.
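To make the pattern Peter describes concrete, here is a minimal sketch of that conventional two-component design: an intent classifier feeding a hand-written flow. The classifier here is deliberately naive and keyword-based, and every name and keyword list is hypothetical, purely for illustration.

```python
# Minimal sketch of the conventional chatbot pattern:
# (1) an intent classifier, (2) a hand-written flow per intent.

INTENT_KEYWORDS = {
    "weather": ["weather", "forecast", "rain"],
    "ride":    ["uber", "ride", "taxi"],
}

def classify_intent(utterance: str) -> str:
    """Naive keyword matcher standing in for a statistical intent classifier."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return intent
    return "fallback"

def ride_flow(session: dict) -> str:
    """A 'flowcharty' script: slots are asked for in a fixed, pre-scripted order."""
    for slot, question in [("destination", "Where do you want to go?"),
                           ("passengers", "How many people are going?")]:
        if slot not in session:
            return question  # wait for the user to fill this slot
    return f"Booking a ride to {session['destination']}."

# The failure mode from the example above: this utterance still contains
# the keyword "uber", so it triggers the ride flow anyway.
print(classify_intent("I hate Uber. Don't ever give me Uber again"))  # -> "ride"
```

Note how the "intelligence" lives entirely in the keyword lists and the scripted flow, exactly the external, programmer-supplied intelligence Peter is pointing at.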
Now, our approach, the chatbot with a brain, is we have the cognitive engine, or the brain, which manages the whole conversation. So, everything you say, it parses deeply to understand what the meaning is given the context, given what has already been said, and it automatically learns new facts. You don't have to program in specifically, I want to learn this piece or that piece. So, it has both short-term and long-term memory. It has the ability to [00:10:00] reason, to disambiguate, and all of that is a core part, an inherent part of the brain, and that, of course, makes a huge difference in the level of intelligence that you can bring to the conversation without having to specifically try to anticipate what people might say. That's really the fundamental difference.
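By contrast, here is a toy illustration of the behavioral properties Peter attributes to a chatbot with a brain: facts are learned automatically rather than coded in one by one, and short- and long-term memory are shared across the whole conversation. This is emphatically not Aigo's engine, whose internals are not public; it is only a sketch of the difference in kind, with all names hypothetical.

```python
# Toy illustration of "a chatbot with a brain": memory and fact learning
# are inherent to the engine, not hand-coded per use case.

import re

class ToyCognitiveAgent:
    def __init__(self):
        self.short_term = []   # recent utterances, available as context
        self.long_term = {}    # facts learned automatically across sessions

    def respond(self, utterance: str) -> str:
        self.short_term.append(utterance)
        text = utterance.lower().rstrip(".!?")
        # Automatically learn any "my X is Y" fact -- no per-fact programming.
        learned = re.match(r"my (\w+) is (\w+)", text)
        if learned:
            self.long_term[learned.group(1)] = learned.group(2)
            return f"Got it, your {learned.group(1)} is {learned.group(2)}."
        # Answer questions from remembered facts.
        asked = re.match(r"what is my (\w+)", text)
        if asked and asked.group(1) in self.long_term:
            return f"Your {asked.group(1)} is {self.long_term[asked.group(1)]}."
        return "Tell me more."

agent = ToyCognitiveAgent()
print(agent.respond("My name is Erik"))   # learned without explicit coding
print(agent.respond("What is my name?"))  # recalled from long-term memory
```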
Erik Martinez: So, what type of commercial applications are you using your technology in today? What are, you know, some simple use cases?
Peter Voss: Yeah. So, the most obvious ones are clearly any kind of customer support where you currently have a call center, people helping individual customers, for any medium to large organization. The medium-to-large part is really just a function of the fact that, to do a good job of it, you do need to integrate with the backend systems. It really needs to become a corporate brain. So, we take the core brain that we have, which knows how to hold a conversation, how to greet somebody, how to disambiguate, language understanding.
All of [00:11:00] that's in the core brain, but then we add a layer of additional knowledge, or ontology, to that, which is company specific. You know, what are the company-specific business rules? What are the company-specific phrases or words that they use? The APIs to the backend system are also in that additional layer that we build. So, that obviously takes some effort to do well, to understand the business well enough to know which business rules we need to include and so on.
So, automating call center calls in a hyper-personalized way actually ends up giving you, in many cases, a better experience than talking to a human. A, you don't have a wait time. B, when you call in a second time, it's as if you are talking to the same agent, who remembers exactly where you left off.
So imagine, for example, you have a problem with your wifi at home or your cable. So, you call in. The first time, you get [00:12:00] some advice. Well, maybe reboot the router, obviously. Okay, you'll try that. Now you call in the second time. The system will remember: did that help? It won't start again and say, please try rebooting your router. Well, yes, I've already done that. Okay, try moving the router into the kitchen, or whatever, and it can remember that. It would be like having a dedicated technician who remembers what you've done already, available 24/7, and without wait time. So, that's the kind of hyper-personalized service you can actually offer with a chatbot with a brain.
It's not just technical service. I mean, one of our big clients is 1-800-FLOWERS, which is actually a group of about 10 different companies. There's Harry and David, Popcorn Factory, and so on. They are using our technology to provide this personalized service across a number of channels, five different channels. So, whether it's Apple Business Chat or Google Business Chat, or whether it's via the website or mobile, [00:13:00] it recognizes who you are, knows your history, and potentially knows who you buy gifts for, for what occasions, and what gifts those individual people have received.
So, for example, it might be that last year you bought chocolates for your niece, and then you interact with Aigo, which can say, your niece's birthday is coming up. Do you want to buy her chocolates again? That's the kind of thing you can do with that chatbot.
So, you can see that's not the kind of service you could really offer with humans in a call center. Whether it's banking, whether it's health, it's completely industry agnostic. It's conversational AI to help people, and it could be inside the company as well. For companies that have thousands of employees, it might be for HR or for tech support.
Tim Curtis: My history in the space, as we were talking before the show, has its foundation in natural language processing, which was, you know, one of the early iterations of using AI, primarily in a customer support or customer [00:14:00] service type realm. Through deep learning, that evolved into natural language understanding, which gives more of the context about what people are saying. But what you're really talking about with the chatbot is moving into a whole different realm. You know, you're talking about those indicators of context.
You're really recreating, in a computer environment, in essence, the processes that our own brain uses to understand context and to reply. So, very interesting. What has this process taught you about intelligence and the human brain? What have you learned? You're light years ahead of where most of the others have been from a technological perspective. I would just love to hear your thoughts there.
Peter Voss: Yeah. There are a number of technical aspects that I could go into in terms of what I've learned, but at a high level, I think the biggest lesson is that to solve intelligence using computers, you really need to start by understanding [00:15:00] what intelligence is, and I think that's the great failure right now, even with the fantastic success of deep learning, machine learning, statistical big data approaches.
They’re very successful in doing a number of things, but they are not at all starting from a point of what does intelligence entail? What does intelligence require? It’s sort of more like, what can we do with a lot of data? That’s the hammer we’ve got, and of course the companies with a lot of money, like Google and Amazon and Microsoft, that’s the hammer they’ve got.
They've got massive amounts of computing power, massive amounts of data, and then ask, what specific problem can we solve with that? Playing Go, or protein folding, or recommendations like Netflix, or targeted advertising generally, which, you know, is worth trillions of dollars. The starting point is not, what does intelligence entail? What do we actually need to do?
And the things that are very obvious are: you need [00:16:00] memory, you need to remember what people said, short-term and long-term. You need to have deep understanding. It's not good enough to just pick up key phrases without understanding them within the context. You need reasoning ability, and that's simply missing from the current deep learning, machine learning approaches. If you hired a personal assistant, a human, to work for you, and they didn't remember what you said three sentences ago, never mind last week...
Tim Curtis: Right.
Peter Voss: They wouldn't last very long. You know, or if they just latched onto some keyword: okay, I'm gonna do this for you. No, I said don't buy this. Oh, I thought you said buy it.
Tim Curtis: Yeah, which is a complete frustration. But if you remember when Siri first came out, that was sort of the positioning of what Siri would be able to do for you, almost filling that personal assistant role. Wouldn't the ultimate chatbot be the next iteration, a chatbot as a personal assistant?
Peter Voss: Well, actually, Siri is an interesting case study there. So, [00:17:00] yes, that's kind of what people assumed Siri would be able to do. I mean, Alexa is sort of in the same mold. Now, the interesting thing is what happened with the founders of Siri when they sold Siri to Apple. Steve Jobs, I mean, this is what I've been told by them, was actually keen to develop Siri further.
What Siri could do initially was just a few things, giving you the weather, giving you reminders; it was a very short list of things that Siri could do initially. Their idea was really to develop it into this personal assistant that would remember you, have hyper-personalization, have memory, and so on, but once they sold the company to Apple, they didn't get any support for doing that ongoing development.
It was just hooking it up to more services and more languages and all of that, and they all left Apple. So, yes, that was the original intent, but really the reason Siri and Alexa and Google Assistant are useful is that very specific things have been [00:18:00] hardcoded by programmers: access to your calendar, or reminders, or giving you sports scores, or whatever.
They have these many services that they can hook into, and you have to know what the magic words are that you need to say. I mean, people who rave about Siri and Alexa have learned how to use them. They've been trained by Siri, by Alexa, essentially; they know what works. But if you try to have a normal conversation with them, they won't remember anything you say.
If you say, my cousin Tina's birthday is coming up next week or something, I mean, there's no way they will understand what you're saying, or learn that, or be able to use that. So, they simply haven't been developed further. Again, the people in charge of these products, well, I'm sure they didn't start by asking what intelligence requires. With Alexa, it's very obviously, how can we beat the competition? How can we have more and more features? You know, how can we just keep adding [00:19:00] features to Alexa? But they're all one-shot things. Set a tea timer for me, do this, open the blinds, do that, and by having just a long list of single commands that you can give and lots of different integrations, it can become very useful, and they can get ahead of what Siri can do. That's sort of the arms race of how many single commands and how many integrations you can have, but not, how do I build something that is truly intelligent?
Tim Curtis: I'm reading a book, Tony Robbins' Life Force. It's a new book. It's fascinating to me. It's interesting that it's paralleling a lot of what you're saying. In it, he's really talking about how a lot of this very same deep learning and mimicking of the brain's patterns is now being used in a medical context. Not only to give diagnoses, but also to really attack diseases and viruses in ways that we, as humans, can't, and the context you're referring to here is obviously more of that [00:20:00] traditional business context. But the underpinnings of what the ultimate chatbot sounds like it's able to do and understand could certainly go beyond personal assistance, couldn't they?
Peter Voss: Oh, absolutely. Once you have this core intelligence, then it can be a cancer researcher. The system could learn how to do research in a particular area and could really help us solve the many problems that we have, you know, whether it's energy, pollution, or governance, or longevity, or whatever it might be. But of course, we need to get to human-level understanding first. We call our agent Aigo, and Aigo would need to have enough intelligence and common-sense knowledge to kind of hit the books by itself and learn these things. We are not anywhere near that yet.
We believe we have the right architecture and approach to do that. So, I mean, if we were trying to put it on a scale, not that IQ is sort of the appropriate scale for that, but Siri is at 15, and we are at 30 or something. We’re still [00:21:00] a long way from human level. So, we keep chipping away at increasing the ability for the system to understand complex things, to learn them, and to reason about them.
Erik Martinez: Peter, let's just talk a little bit about what AI is and isn't. You're talking about real intelligence being able to perform tasks that humans can currently perform because of the way we think and learn and are able to bring in abstract data. In today's world, what is AI? Because I think there's a lot of misunderstanding about what it is and what it isn't, and there are a lot of businesses out there touting AI-driven solutions. I've read a few of your articles where you talk about the different types of AI, right? The good old-fashioned AI and, you know, machine learning. Could you just dive a little bit into those definitions and then say, this is kind of the state of where we are today, and this is what you can expect over the next decade?[00:22:00]
Peter Voss: Yeah. I’ve already mentioned that the original idea when the term was coined was to build thinking machines, but that turned out to be really hard. So, what happened is AI morphed into narrow AI, solving one particular problem at a time or one set of problems at a time. The important difference there is that it’s really the external intelligence, the intelligence of the programmer or the data scientist that figures it out.
So, whether it's protein folding, or any number of things that are being done now, autonomous cars, image recognition, it's really the intelligence of the data scientist or programmer that figures out how to tag the data, what data to use, what architecture to choose, and then experiments and tweaks parameters and so on until they get it right. It's using advanced computing technology to solve a particular problem, and that's what's called AI. That's just where the word has gone.
You [00:23:00] know, I remember that 15 years ago, AI was actually a swear word. You didn't want to call your company AI because, oh, AI doesn't work, it's failed, we've tried so many times. Then about nine, ten years ago, deep learning, machine learning, had this breakthrough of using massive amounts of data and massive amounts of computing power, and it started doing really useful things. It started improving speech recognition and image recognition and so on.
Suddenly, billions of dollars flowed into this area. People started renaming their companies to .ai because suddenly that's the way you could get funding and attract smart people, and we've been in that era now for, I think, 10 years. AI has been hijacked by deep learning, machine learning; it's kind of the only game in town. It's really the only thing people know. We have a whole generation of people being trained on AI who only know about machine learning, deep learning. They don't even know that there can be anything else. So, that's one distinction: between [00:24:00] general AI, or what we call AGI, and narrow AI.
DARPA also had an interesting taxonomy of AI that they published a few years ago; they call it the three waves of AI. The first wave is what's also called good old-fashioned AI. That would be traditional logic approaches. Deep Blue, the chess champion, was a good example of that, along with expert systems, basically logic-based systems, and that dominated AI for many decades. So, that's the first wave of AI.
The second wave is basically this deep learning, machine learning, or neural network approach, statistical systems, which hit us like a tsunami. It's been incredibly successful in the areas where it's appropriate.
The third wave is really much more like AGI. It's reasoning, thinking systems, or cognitive [00:25:00] architectures. It's basically having some system, some engine, that does the reasoning and learning and so on, and that is adaptive. So, there are a whole lot of attributes that are much more like the way our brain works. Rather than throwing massive amounts of data at it, number crunching, and building a model, it's dynamically digesting the information.
And a good example there, to concretize it, would be that you can show a child a single picture of a giraffe and they'll be able to recognize a giraffe from the front, the back, upside down, pink, whatever. With deep learning, machine learning, you need hundreds or thousands of images of giraffes for it to be able to recognize one, and then it probably still wouldn't recognize a pink giraffe. So, that's the third wave of AI: the cognitive approach, concept formation, the ability to abstract, to reason about things. I think that's a good way of making the distinction between the different things.
Current [00:26:00] chatbots basically use first wave and second wave technology. Second wave technology is used for intent categorization. So, blah, blah, blah, weather. Okay, you want to know the weather. Then they kind of use first wave technology for executing this flow chart type program, where there might be some logic involved. AI has really become a marketing term, and you can't really fight that, and that's the reason we coined a new term, AGI, rather than trying to persuade the world that what's being marketed as AI is not actually AI.
Erik Martinez: So, I was reading an article. I think, published by your co-founder, and I’m not even gonna try to pronounce his name.
Peter Voss: Srini Pagidyala.
Erik Martinez: Yes. So, I was reading that article, and I'm kind of trying to come back to the commercialization, right? What's in it for me? Why do I need to know this? The thrust of his article, which was written last November, is really about engaging experiences and the customer experience. Can you just [00:27:00] tell us a little bit about your philosophy behind that concept of customer experience? You've touched on it a couple of times earlier. Real-time response, being able to remember things, but what are the other applications our audience can think about as they consider the future, their businesses, and different technology choices? What do you think is most important in that, and how does that play into the customer experience?
Peter Voss: There are basically different levels of engagement that you have with customers. You have the case where a customer has to contact you to get something done. That's kind of our experience usually with banks and telcos and so on. We don't really want to deal with them, but we have to in order to get things done. So there, it's moving away from an awful experience: long wait times, not getting answers, having to repeat yourself. You call into your bank, you give all your details, and they say, I have to transfer you, and you wait half an hour, you [00:28:00] get transferred, and you start all over again. That kind of thing.
So, the first level is basically just to make that experience, the one you need to have to get stuff done, much more pleasant. That's the first level of what you're trying to achieve, but then you want to hyper-personalize it so that you actually delight the customer: oh wow, I didn't have to repeat myself. It remembered that I called in last week. You know, or it knows that I placed an order and the order is late, so obviously that's what I'm calling about. Or I got a text message from my bank, or whatever it might be. So, that hyper-personalization would basically be the next level.
Ultimately, the engagement gets to a point where you actually want to interact with that concierge service, with that personal assistant, where it's so helpful that you reach out and say, hey, can you do this for me, or you get automatic reminders for things. [00:29:00] So, that's clearly where companies would like to end up, where the customer repeatedly wants to engage with the company. They don't want to look anywhere else because it's fulfilling whatever needs it can fulfill.
I think that right now a lot of companies just have to move past the awful experience that they're providing. Then you go to hyper-personalization, and then to a point where people actually want to engage, and you can start offering them more and more services because they're getting value from it, and it's a pleasant experience.
Tim Curtis: Which is everything when you're talking about customer engagement and customer experience, that context that drives it. I'm planning on attending MAICON, the Marketing Artificial Intelligence Conference, in Cleveland in just a couple of weeks. Erik may be going with me. I guess it's the AGI concept that's making its way now into marketing, outside of just the customer support or customer experience realm: using that technology to solve for more granular segmentation for marketing campaigns, using [00:30:00] AI to refine scoring models that may have been artificially set up by humans with numeric values based upon some research category they decided to apply. When you begin to break outside of the thinking pattern of solving for very narrow areas, when you begin to really understand what can be solved for in a marketing context, I think it really opens your eyes to what the future of marketing is. Have you had any thoughts or glimpses into what you envision that future to be?
Peter Voss: Oh yeah, absolutely. You're talking about statistical approaches that make smaller and smaller groups that are better defined, so that you know more about people and can be more targeted, but I still think that's the wrong approach. You don't want people to be a number. You want people to be individuals. So, you want to be marketing to the individual, a demographic of one. That is ideally what you want, but deep learning, machine learning doesn't have [00:31:00] that mindset, doesn't have the tools to do that. It's just, how can we be more targeted? But it's still a shotgun approach, just more focused. It's still, you fit this profile statistically, so we'll blast this at you. What you really want to do is target the individual.
In fact, it's interesting: when the CEO of 1-800-FLOWERS spoke to us, he had actually found out about us through a podcast as well, and they contacted us. He told the story that they had been looking for this concierge-type service, and they actually tried to implement it with other chatbot technology and spent millions of dollars and couldn't solve the problem.
His motivation was when they started 1-800-FLOWERS, it was one shop. They knew all of their customers and they could give this hyper-personalized service to individuals. As the company grew, I mean, they now have more than 10 million customers. Of course, they can’t do that anymore. They want to try and get back [00:32:00] to that as if it was your corner flower shop that knows each customer personally. Whether they have kids or what they buy, what their wife likes, and so on. So, that was their motivation in marketing to the individual.
Tim Curtis: When I look at the hyper-personalization you're talking about, I've been pulled into a couple of these discussions, as people have started, knowingly or unknowingly, moving more into that artificial general intelligence where you're really emulating the brain. That's where all of a sudden the conversation begins to change a little bit. I want to pivot just a moment to talk about what I'm seeing.
So, I chair a privacy committee in Washington DC through the ACMA, and one of the things that we're looking at is all of the state legislation that's popping up. California, obviously; the California Privacy Rights Act is the latest iteration out of California, and it really is driving the legislation across the country, including conversations we're having at a federal level. That being said, the atmosphere now surrounding ethical [00:33:00] questions related to this increasingly humanistic element of artificial intelligence begins to really raise those concerns, and now I think we're beginning to try to have conversations to define, what are the limits? What are we comfortable with in terms of the next iteration of artificial intelligence? You have to be involved in that, right? That has to be a part of what you guys are having conversations about.
Peter Voss: Right. So, my own view is that it's unfortunate that governments tend to take a one-size-fits-all approach, and they tend to react to bad press. You know, obviously, whatever will get them votes, whatever will make them popular. So, I mean, like we're seeing in Europe, there are these extremely restrictive privacy rules that, in many cases, actually prevent you from offering a hyper-personalized service. Or the awful law we have now where, on every website that pops up, you have to click I agree, or don't email me. I mean, what's the point? Who's ever gonna click on [00:34:00] more information or more options? It doesn't achieve anything other than to annoy people. And there are still so many things that are carved out, and, you know, exclusions and whatever.
So, I'm personally a very strong believer in personal responsibility and personal authority. To me, the starting point is honesty, obviously. Companies need to be honest about what they're doing. One of the things I also strongly believe is that you should never have a chatbot or an IVR where, you know, you can't easily find out whether it's automation or not, and some of our customers try to push us into that and say, we want the thing to just be like a human.
You know, it greets you with a random human name, and if somebody asks, are you human? They may even not want us to say, no, I'm a bot. I don't like that at all. People should be able to individualize what they want. I mean, if you want this hyper-personalized service, maybe you want to tell it about your family and what gifts they like and what they don't like, because it'll make your life easier. [00:35:00] Or maybe you don't. But you should easily have the choice to say what you share and what you don't share, and systems should be straightforward and honest about those things. That's my opinion, but laws try to be one size fits all, and there are always unintended consequences, you know, with these laws.
Tim Curtis: Absolutely. I think you're right. The unintended consequence we're looking at right now is that the ad tech industry has largely gone in the direction of leveraging AI, at different levels, in its targeting practices. As a result of this legislation coming into play and the shutting down of third-party cookies, which comes more from the atmosphere of privacy concerns than from legislation, all of that rich data that has been used for hyper-personalization, that spigot is being turned off.
Peter Voss: Right.
Tim Curtis: And [00:36:00] so, we’re going to be setting AI-capable ad tech back in terms of its effectiveness. That is something that I would say CEOs and chief marketing officers have not figured out. They do not realize that’s going to hit them. That’s one of those consequences.
Primarily, it started with GDPR in the EU, which, as you said, Peter, is largely ineffective, and then the myriad of state laws popping up here in the States are having a very similar effect. So, no question, that's going to be a consequence for the industry. Do we care about privacy? Do we care that we, you know, regulate? Yes. We understand the data breaches and making sure that personal information is locked down, but there has been no conversation about the effects of those changes. So, you are spot on in your assessment there, in my opinion.
Peter Voss: Now, if we are looking at the longer term, and maybe I shouldn't be saying this because, you know, we are obviously marketing our chatbot to large enterprise customers, [00:37:00] but I actually see a longer-term future where, instead of you interacting with a chatbot from your bank, your insurance company, and your retailer, Amazon or whatever, you will have what we call a personal assistant.
I'll explain why I call it a personal assistant, but you'll have your personal assistant that deals with these other chatbots. More likely, it'll actually deal just with APIs. Now, in that future, you are totally in control of who you talk to and what you share with whom, through your own personal chatbot. Let me explain why I call it a personal assistant: there are three different meanings of the word personal that come into play here.
The first personal is that it's yours, you own it. It serves your agenda. It doesn't serve the agenda of Apple or Amazon or some other company. The second personal is that it's [00:38:00] hyper-personalized to you. It's customized to you. Through interactions with it, you personalize what it knows about you and so on.
And the third personal is the privacy issue. It's personal as in private. You decide what you share with whom. That's how I see the longer-term future of the personal assistant: not having companies control it or push things at you. You'd essentially have a gatekeeper, though gatekeeper is maybe the wrong word, because it gives the impression that you only keep stuff out. It would at the same time also be an explorer for you, going out and finding things for you that you might not otherwise find.
Erik Martinez: Sign me up. I need that right now.
Peter Voss: Whenever I tell that story, that's the consistent reaction I get. We just need an extra hundred million dollars or so, because breaking into the consumer market is obviously not easy, but our technology lends itself exactly to that. You know, at the moment people expect to have this personal [00:39:00] assistant for free. Well, it's not actually free; they're selling their souls. We would need to convince people that you should be paying 10 or 20 or 30 dollars or whatever a month, depending on what level of support you get, to have this personal assistant. I think people would be very happy to pay for that if it really is their own and it serves their agenda and not some mega-corporation's. In our company philosophy, we truly would never sell the information. That's not the business model, whereas everybody else's business model is selling your information or using it for themselves.
Erik Martinez: Well, sign me up. I will gladly be a beta tester because that sounds fascinating. This whole AI and AGI conversation has been very interesting. For our audience, if you're getting into this, where are the best places to learn more and get better educated on the choices you can make about the types of technology you're using? Whether [00:40:00] it's for marketing or customer service or customer engagement. All those things are starting to blend together anyway. We're definitely in a society where everything's interconnected. If somebody said, man, this is really intriguing and I want to learn more, where do I start?
Peter Voss: That's tricky because, as I mentioned earlier, deep learning, machine learning so dominates the field. It has sort of sucked the oxygen out of the air, and to really find out about other approaches, like the third wave of AI, more cognitive approaches, is really hard.
I’ll give you one particular anecdote there. We had a brilliant intern from Germany work on our project, and he really understood the whole cognitive approach. He went back to Germany to do his Ph.D. He couldn’t find a sponsor for that. So, he ended up doing deep learning, machine learning, and he’s lost to it now. That’s his expertise. That’s what he’s gonna be spending his life doing. It’s really hard to get knowledge outside of machine learning and deep learning, and that to me is relatively boring.
Everybody's doing it, and they're making little [00:41:00] improvements, and I mean, some of them are clearly very impressive, but there's no real intelligence there. So, from a commercial point of view, on what technology to utilize: as you said, people stick the AI label on everything. Well, you obviously have to just ignore that and ask, is the technology or the tool that I'm considering buying or integrating likely to solve a meaningful problem?
Too many tech departments and companies just fall in love with technology. GPT-3 comes out. Wow, look at what interesting conversations it can have. Never mind that the outputs are often gobbledygook and you can't rely on them; it's just, wow, this is the latest technology, we've gotta play with it. Millions are spent on playing with it, but is it solving a problem, or is it just the next shiny new object? So, I think it's really about making those fundamentally solid business decisions about what you're trying to achieve, rather than just falling in love with some new trend.
It's hard. It's hard to get good [00:42:00] advice. Big companies tend to be very good at marketing. I mean, like IBM with cognitive computing; they've hijacked the word cognitive. Now, you'd think cognitive computing would be intelligent. Well, it's not. They're basically just API services, and it's kind of destroyed the word cognitive. So, it's hard. It's hard to navigate that landscape. I've written a few articles on that. I know Gary Marcus, for one, has written quite a bit on the limitations of deep learning, machine learning. Of course, machine learning is good for a lot of things, if statistical approaches are the right way to go, but they're not gonna give you that intelligent interaction, if that's what you're looking for.
Tim Curtis: One place I have gone to learn a little bit more about this is your own website, Aigo.ai; under the resources section there are articles and podcasts. So, if you're looking to get a bit of a flavor of what a real cognitive approach is about, that's a great [00:43:00] place, I think, to go, because, you know, you're getting it from the source.
Peter Voss: Just to elaborate a little bit on the usefulness of deep learning, machine learning. I think quite a few people have done a more sober analysis and asked, of the applications that do seem successful for deep learning, machine learning, how many of them actually needed it, versus where you could have just used more traditional statistical regression techniques that are much cheaper and easier to implement?
So, often it can do the job, but it's overkill. Then the problem you have is that the people you hire in your company probably don't wanna work on boring statistical regression. They want to work on deep learning, machine learning. So, I think it's that sort of sober assessment of what you really need. I think that's a mistake a lot of companies make, that it's overkill. Again, they fall in love with the shiny new object.
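As a concrete version of that sober assessment, here is what the cheap statistical baseline can look like: an ordinary least-squares regression in a few lines of numpy. The numbers are entirely hypothetical; the point is only that if something this simple predicts well enough, a deep learning pipeline is overkill.

```python
# Cheap baseline: ordinary least-squares regression with numpy.
# No GPUs, no training pipeline, and the weights are directly interpretable.

import numpy as np

# Hypothetical data: predict weekly units sold from price and promo spend.
X = np.array([[9.99, 100.0],
              [7.99, 250.0],
              [5.99, 400.0],
              [6.99, 300.0]])
y = np.array([120.0, 340.0, 610.0, 450.0])

# Add an intercept column and solve the least-squares problem directly.
X1 = np.hstack([np.ones((X.shape[0], 1)), X])
coef, _, _, _ = np.linalg.lstsq(X1, y, rcond=None)

# Predict for a new price/promo combination.
prediction = np.array([1.0, 8.49, 200.0]) @ coef
print("intercept and weights:", coef)
print("predicted units:", prediction)
```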
Erik Martinez: One more question. I was listening to your distinction between [00:44:00] cognitive learning and deep learning and looking at some of your articles, and I think one of the important things I learned in this conversation is that the technology you are working on really has the ability to adapt and learn. You go into a client like 1-800-FLOWERS, and I know there's the technical heavy lifting of connecting databases and all those things, but how does that learning process work when you implement your technology? You know, with deep learning, it's like, throw lots of data at it. How does that work with your technology?
Peter Voss: Yeah, that's a very good question. So, the fundamental distinction is that we are looking not for a lot of data, but for quality data. Take 1-800-FLOWERS, which you mentioned. We basically need the ontologies: what are the terms that are specific to this company or to this application? The product list, obviously, the product [00:45:00] names would be one thing, but there might also be other phrases that people commonly use when they talk about different life events and so on. So, we need to make sure that these phrases are trained, that the system knows about them, because they might not be commonly used.
So, again, we start with a core of common knowledge that is shared by all the applications that we do, and then we add to that the ontology and business rules and API connections for that particular application. Adding those ontologies is not usually a huge task. As I say, it's not the quantity of data that matters, it's the quality. Usually, you can get that; the companies have it in their databases or in their training material or documentation or whatever. Now, we don't do any automatic learning from bulk data, because you can never trust big data. You know, garbage in, garbage out.
Our [00:46:00] customers basically want to sign off on the knowledge we do incorporate into the system. You know, if there are FAQs, how do we answer a particular question? So, it's their legal department, their marketing department, and their customer experience department. They all need to sign off on the knowledge we actually incorporate into the brain.
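To picture the layering Peter describes, here is a hypothetical sketch of a shared core brain extended by a curated, signed-off company layer of ontology terms, business rules, and API hooks. The structure and every name in it are illustrative assumptions; Aigo's actual knowledge representation is not public.

```python
# Hypothetical sketch of "core brain + company layer" composition.
# All keys, rules, and URLs below are made up for illustration.

CORE_BRAIN = {
    "greetings": ["hello", "hi", "good morning"],
    "disambiguation": True,          # shared conversational competence
}

company_layer = {
    "ontology": {                    # curated company-specific terms
        "Harry and David": "brand",
        "sympathy arrangement": "product_category",
    },
    "business_rules": [              # signed off by legal/marketing/CX
        "same-day delivery orders must be placed before the local cutoff",
    ],
    "api_hooks": {                   # placeholder backend integration
        "order_status": "https://example.com/api/orders/{order_id}",
    },
}

def build_corporate_brain(core: dict, layer: dict) -> dict:
    """Compose the shared core with the curated company-specific layer."""
    brain = dict(core)       # start from the common core
    brain.update(layer)      # add the company's ontology, rules, and hooks
    return brain

brain = build_corporate_brain(CORE_BRAIN, company_layer)
print(sorted(brain.keys()))
```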
Now, if you have a personal assistant, of course, that's totally open-ended. You should be able to talk about anything and teach it anything that you want, but also, if you teach it garbage, it's not gonna affect anybody else. You know, if you want to tell the system you've got 20 kids or something, it'll believe you: okay, tell me, what are the names of your 20 kids? You know, you can play games and confuse Aigo if you want.
Erik Martinez: So, just out of curiosity, what does an onboarding period look like? Is that a year-long process? A six-month process? Can it happen as quickly as 30 days?
Peter Voss: So, we've onboarded as quickly as over a weekend. We had an [00:47:00] emergency request where, you know, they literally needed a new service by Monday. So, we can put something together very quickly, depending obviously on the application, but more realistically, we're talking about six to eight weeks to put everything together.
However, when we are talking about dealing with large companies, add another six months to that to actually get the APIs exposed, to go through security reviews, to go through whatever delays are inherent in the company. So, practically speaking, we are almost always waiting for the customer to go through some additional process that they need to go through for the project.
So, if everything is available, for a moderately complex system we can talk about six to eight weeks from start to going live. Practically speaking, also, you almost always start with one particular application and then [00:48:00] expand. So, you get all the basics sorted out, what we call the corporate brain, which is basically the additional layer: the API integrations, the business rules, and so on.
With 1-800-FLOWERS, we started with one of their brands, one channel, and one application. You know, it's now 10 brands, five channels, and about 50 applications. Obviously, the incremental ones become easier and easier.
Erik Martinez: Sure. That makes sense. Well, Peter, thank you so much for this conversation. I learned a lot in our discussion. If somebody wanted to reach out and connect with you, what’s the best way to do that?
Peter Voss: Well, the website. You could also reach me directly on LinkedIn, on Facebook, or just email me, [email protected]. So, it's very easy to get hold of me on different channels.
Erik Martinez: Well, great. Well, thank you so much for joining us today. I’m Erik Martinez from Blue Tangerine.
Tim Curtis: And I’m Tim Curtis from CohereOne.