Explaining AI through human nature

Published February 23, 2022

If humans don’t understand emotions, how can machines, wonders Alok Aggarwal. Our most recent guest on Slaves to the Algo says, “Even humans have a hard time learning human emotions. Somebody may be smiling but be thinking of something different. We probably don’t even understand ourselves. Forget about understanding others. If [machines] were right, I think the number of divorces, the number of issues that humans have with humans would probably go down dramatically!” 

Alok is the Chairman and CEO of Scry Analytics, a company that provides AI-based products, solutions and services across industries. He has 40+ years of experience – in which he founded the IBM India Research Laboratory and co-founded Evalueserve (providing research and analytics services and pioneering the concept of KPO). Alok has also added the role of ‘author’ to his list of achievements. In Hundred Years of Artificial Intelligence – Past, Present and Near Future, he explores the rise, fall and return of trends in AI.

Alok and Suresh also talk about the future of AI and how it will impact or disrupt various industries! Check out the full conversation below.  

About Slaves to the Algo  

Whether we know it or not, like it or not, our lives have been taken over by algorithms. Join two-time entrepreneur and AI evangelist Suresh Shankar, as he talks to leading experts in various fields to understand how they are using or being used by algorithms in their personal and professional lives. Each episode highlights how businesses can leverage the power of data in their strategy to stay relevant in this new age of AI. Slaves to the Algo is brought to you by Crayon Data, a Singapore-based AI and big-data startup.  

Suresh Shankar is the founder and CEO of Crayon Data, a leading AI and big data start-up based in Singapore. Crayon Data’s flagship platform, maya.ai, is the AI platform powering the age of relevance.  

How to listen to our podcast  

Apple Podcasts
Spotify
Google Podcasts
YouTube

Full transcript of the episode below: 

Suresh Shankar  00:07 

Hello viewers and listeners. Welcome back to another episode of Slaves to the Algo. I’m Suresh Shankar, founder and CEO of Crayon Data, an AI and big data company, podcaster and host of Slaves to the Algo. Slaves to the Algo is my attempt to demystify the age of the algorithm. I plan to share my learnings, and those of leading experts in the field, professionals who have delved deep into the subject, to understand how they’re using and how they’re being used by algorithms in both their personal and their professional lives. Today, I’m particularly delighted to have Dr. Alok Aggarwal, CEO of Scry Analytics, a company that provides AI-based products, solutions and services across industries. Alok is a very interesting person, because he’s been a geek, a computer scientist and an algo person for a very long time. For decades, in fact. He did his PhD in electrical engineering and computer science. He worked at the IBM Watson Research Center when IBM Watson was literally the fount of so many patents in the world. He set up the AI lab for IBM India, the research laboratory. He co-founded one of the world’s first data-led research firms, Evalueserve. And he currently is the Chairman, CEO and chief data scientist at Scry Analytics. But one of the reasons Alok is here today is that he has added the role of author to his list of achievements. He has written – or is writing, and will soon be launching – a book called 100 Years of Artificial Intelligence. You heard that right, 100 years: 100 Years of Artificial Intelligence – Past, Present, and Near Future. I’ve had the privilege to get an advance copy of the book and read some of his stuff, and he explores the rise, the fall and the return of trends. So I’m really delighted to have a trailblazer in AI and analytics on Slaves to the Algo today. Welcome to the show, Alok. Thank you for your time.

Alok Aggarwal  02:15 

Thank you for having me, Suresh really delighted to be here. 

Suresh Shankar  02:19 

Alok, I know you’re a professional, but I always like to start my episodes by asking guests a slightly more personal question. I mean, we are professionals. We do work for companies, we teach, we mentor people, we write books as you are. But we are also affected as individuals by the development of AI. And if you look at your own life – and you’ve seen this over 40 years – can you share with us some examples of what I would call some great algorithms that you’ve come across, how they’ve impacted you, and whether they worry you, whether they affect your life positively or negatively? I mean, the Google search algorithm is one of those great ones, but everybody knows about that. Any others that you think are really impacting life today?

Alok Aggarwal  03:00 

Right. So I think deep learning networks are probably, in my view, one of those algorithms – or networks, or systems – that are just amazing. And people often don’t realize that artificial neural networks were invented in the 1950s and first put to the test as perceptrons. Deep learning networks were not that far behind: in the Soviet Union, Professors Ivakhnenko and Lapa actually wrote the first paper on deep learning networks in 1965. They also created a deep learning network, theoretically, with eight layers in 1971, and showed how to train it. So it goes that far back. People often think this is something new, but actually it’s not. And why it’s so interesting is that the theory was there – at least the beginnings of the theory – but we couldn’t really make these networks practical, because semiconductors were just coming out. Most of the work in 1965 was done using valves – vacuum tubes – which went back to 1945, during the Second World War, and continued in use until 1965. And building these networks and training them was prohibitively expensive. In fact, building a network with 86 billion neurons – the size of a human brain – would have cost the entire GDP of the United States in 1976. And Moore’s Law – although I like to concentrate on algorithms, you cannot actually separate them from what was happening around them – Moore’s Law has basically reduced the cost of computing and of storage by a factor of a million, and suddenly you have something you can really train very quickly and use very effectively, and deep learning networks have become ubiquitous.
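The perceptron Alok mentions can be sketched in a few lines. This is an illustrative toy only – the task (a logical AND gate) and all parameter values are invented for the example, and none of it comes from the book:

```python
# A minimal single perceptron, the 1950s building block Alok refers to,
# learning the logical AND function with the classic update rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train one perceptron: nudge weights toward the target on errors."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    """Threshold unit: fire if the weighted sum exceeds zero."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND_GATE = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_GATE)
print([predict(w, b, x1, x2) for (x1, x2), _ in AND_GATE])  # [0, 0, 0, 1]
```

For linearly separable data like AND, this rule is guaranteed to converge; the deep, multi-layer networks discussed in the episode stack many such units and are what needed Moore’s Law to become practical.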

Suresh Shankar  05:04 

So is the deep learning network your favorite algorithm?

Alok Aggarwal  05:08 

Absolutely.  

Suresh Shankar  05:13 

Could you give one example of somewhere a deep learning network is affecting us as, let’s say, consumers or business professionals?

Alok Aggarwal  05:21 

So computer vision is probably the biggest one. I mean, pretty much all computer vision algorithms have deep learning networks underlying them. If you go into more detail, it’s really affecting autonomous car driving, and it will affect natural language processing very soon. A large number of deep learning networks have come out – what we call in the book gigantic networks. And given the amount of data that can be used for training, it will affect pretty much everything that we think of in terms of vision, audio, speech to text, and of course natural language processing and generation.

Suresh Shankar  06:09 

That brings me to your achievements and what you’re working on – your past, your present and your future. And I’m really going to start with the future, because the very near future is your book. I spent a little bit of time over Christmas and New Year going through all the chapters – I know you’re not completely done. What’s very interesting for me, when I opened it, is that – and people tend to forget, right? You just mentioned it: everybody thinks that all this is new, “I invented AI”, and all the youngsters seem to think that’s what it is. But you dedicated it to the memory of Alan Turing, Marvin Minsky and John McCarthy, who literally created the basic science behind this some seven decades ago. I remember Alan Turing, in 1950, asked: can a machine imitate human intelligence? And in ’61 Minsky wrote that within our lifetimes machines may surpass us in general intelligence. You’ve seen the revolution, you’ve been in it for 40 years. And I think it was predicted that a chess machine would beat humans within 10 years, and that was in 1958 – it took 40. So what do you think? How far do you think we’ve progressed on the sort of prognosis that Turing and Minsky and McCarthy had?

Alok Aggarwal  07:38 

Well, in terms of prognosis, I would like to say machines have done well in 50 to 60 areas – again, deep learning networks and other algorithms have done well with lots of data, and the fact that Moore’s Law has been instrumental in making computing and storage cheap has helped very much in that respect. But look at what Turing was thinking about, what Minsky was thinking about – and Minsky, by the way, was an advisor to the well-known movie 2001: A Space Odyssey, where you have HAL 9000, the artificial general intelligence computer, which acts like a human and has emotions like humans. If you think from that perspective –

Suresh Shankar  08:27 

Alok, don’t you just love that line in that movie where the computer says that it must be human error?

Alok Aggarwal  08:31 

That’s right, and I write about that in the second chapter of the book – the line that it must be attributable to human error. For people who are interested in the history of AI, that movie is definitely worth seeing at least once, if not a couple of times, particularly because Minsky was an advisor to it. And that makes it even more interesting, because that was the thought process Turing had. He gave us the imitation game, as you said, and in 1952, in a BBC interview, he actually said that he believed that by the year 2000, seven out of 10 jurors would be fooled by the computer. So he actually believed that by 2000 this would happen – an artificial general intelligence computer would exist.

Suresh Shankar  09:20 

And when do you think it is going to happen? Is it 2030? Is it 2040?

Alok Aggarwal  09:25 

So this is a very interesting question. These guys – and that’s why I credit them a lot, and the first few chapters are actually written in their memory – were geniuses and pioneers, but they had it wrong. And that’s the nature of hype, and of getting all caught up in it. Turing, unfortunately, passed away in the early ’50s, but McCarthy and Minsky continued. And of course they would have massive discussions at MIT with social scientists and others, who said: look, this is not possible, because humans have emotions, and to have emotions you have to live like a human. So the hype was there in the 1950s and ’60s, and it went down in the 1970s. And a very interesting article came out in 1977 in The New York Times. By that time the hype had gone bust, and the reporter asked John McCarthy a very interesting question: sir, when do you think artificial general intelligence will occur? And his answer was equally interesting. He said: perhaps in five years, perhaps in 500 years – what you need is 1.7 Einsteins and 0.3 Manhattan Projects. So he had realized by that time the gravity of the situation – that you seriously need paradigm shifts, the way Einstein created a massive paradigm shift from what Newtonian physics was. And he pointed out that it is 1.7 Einsteins. The Manhattan Project, by the way – set up by the US government in the late 1930s and early 1940s, the project that created the atomic bomb – was the largest government project ever funded, worth about $2 billion at the time, about $25 or $30 billion if you take inflation into account. So 0.3 Manhattan Projects seems to me on the low side, because that’s about $9 billion today. For research alone, even that is a relatively low number for the world; it may be more like three or four Manhattan Projects. The more important part is the paradigm shift, the 1.7 Einsteins, which we are nowhere close to. So yes, deep learning networks and other algorithms and data will help us in many, many places where we need help, and they will make us more efficient as humans. But getting to artificial general intelligence – I don’t see it happening anytime in the near future, because I don’t see the paradigm shift coming from anywhere.

Suresh Shankar  12:08 

And I’m going to come to that. I think you made two or three interesting points, and I just want to go a little bit deeper on them. An interesting point about, you know, “they were wrong” – but perhaps we should look at it differently. Turing and Minsky weren’t wrong in terms of the direction or the scale; possibly they were a bit off on the timing. Which brings me to the other point you made. I don’t even know that they were off on the timing; it’s just that for AI to work, you need a lot of component technologies to come into place – the cloud, the chip, Moore’s Law, all of these things – and perhaps it is those physical laws, the things that are still rooted in the physical world, that have meant it’s not 40 years from when Minsky said something, but maybe 70 or 80, to get there. So that’s to me a big takeaway from what you’re saying: that these people could even imagine and envision this in 1950, when even the IBM 360 wasn’t a thing yet.

Alok Aggarwal  13:01 

Yeah, absolutely. And that’s exactly why I credit them – I call them the three fathers of AI. Because, see, knowledge is extremely important, right? We all strive for knowledge: we read books, we go to classes, we continue learning via books and other means. But what trumps everything is imagination. Imagination always trumps knowledge – Einstein came up with new ideas, and that was his imagination. And that these three people were imaginative enough to even think about it in those terms is interesting to me. Now, whether artificial general intelligence will happen in 40 years, 400 years, or 500 years as McCarthy said, I don’t know. Are you placing

Suresh Shankar  13:48 

your bets on this, Alok?

Alok Aggarwal  13:50 

My bet is it probably doesn’t matter whether it’s 5 or 500. I think we should look at it purely from the selfish perspective of being humans: we want machines, AI, or whatever we want to call it, to work for us. And we are already at the stage where they are helping us improve many of the things that we do. Many of the menial tasks we do will be taken away, which will free up people to do other interesting things – whether it’s art, whether it’s surfing, whatever. So it will open up many other areas. And, as I write in my book, it will be very much like the Industrial Revolution, when steam engines and the first revolution became pervasive by the mid-to-late 1800s. And then, of course, the same was the case with motors – today there is a motor running in my laptop, and we don’t even think about it. I think AI will be pervasive by 2049, and that’s partly why the 100 years run from 1950 to 2049.

Suresh Shankar  15:04 

I’m going to come back to some of those themes. But you mentioned something very interesting to me also. You said people used to say: but human beings have emotions, and machines will not be able to do some of these things – that’s what the critics said when these pioneers talked about it. Do you think that today we are actually starting to see the manipulation of emotions? We’re starting to see it in some of the areas where people are using AI. Do you think that machines will learn to read human emotions? I know you’re an avid reader, and there are novelists now writing about artificial friends and things like that. Do you believe that machines will also learn human emotions?

Alok Aggarwal  15:43 

I don’t know. I personally think even humans have a very hard time learning human emotions. Somebody may be smiling and may be thinking something different. I was having this discussion with another person: I think a lot of it is our human biases, coming from a particular country. It’s well known, for example, that when Indians say yes, they typically nod as if they were saying no while saying yes. How does a human who doesn’t know the context figure out that this guy is Indian, that he’s saying yes while nodding from left to right rather than from top to bottom? So I have my doubts about machines. I mean, humans are so wrong about emotions – if machines were right, I think the number of divorces, the number of issues that humans have with humans, would probably go down dramatically. We probably don’t even understand ourselves; forget about understanding others. So having machines do it will be…

Suresh Shankar  16:38 

That may be the single best thing I’ve heard on my podcast: humans don’t understand emotions, so how do we expect machines to?

Alok Aggarwal  16:44 

I mean, isn’t that a fact? Don’t husbands and wives say it to each other? Especially wives – nothing against them – would say: you don’t understand. You’re not listening to me. You’re hearing me but not listening to me. So if that’s the case, and now you put in something which is made of semiconductors – not even carbon and hydrogen – and you want that to understand? I tend to agree with the social scientists of the 1960s, who did not even want to talk to Minsky and McCarthy at MIT. It’s a well-known old story that they would not even sit at the same table, they fought so much on this issue. So, with maybe some tweaks, I agree with what the social scientists were saying at that time.

Suresh Shankar  17:28 

That’s absolutely right. But moving on a little bit, Alok, into another area. In your book, one of the most interesting chapters to me is the one where you talk about 14 different subfields of AI: rule-based systems, NLP, machine learning, speech recognition. And it strikes me, as a practicing professional – I’m not anywhere near the academically, deeply knowledgeable person that you are – that most of what passes today for AI is simply a rule-based system that has been codified by a human being with their best knowledge of what they think the rules should be. Would you tend to agree with that? Do you think systems are moving beyond rule-based? And which of the other subfields do you think are going to be de rigueur in, say, the next two or three years?

Alok Aggarwal  18:19 

So I agree with you completely. I think 80% of the companies – probably even more; we were doing a survey very recently – who call themselves AI-based companies actually use expert systems or rule-based systems. Nothing against that: it’s part of AI, and we should not take it in any wrong way. In fact, that was the second hype, created in the 1980s, and it created the second winter, because the hype was that rule-based systems, or expert systems, would effectively be the next way to create artificial general intelligence. And soon that flopped too – it was a small hype and a small bust. But I remember very distinctly, coming out of my PhD, being told: don’t sell yourself as an AI expert, because you won’t get a job – the second hype was almost going bust.

Suresh Shankar  19:09 

Isn’t it just the reverse today? Even if you aren’t one, you sell yourself as an AI expert.

Alok Aggarwal  19:12 

Yeah, exactly. That’s the nature of hype and buzz, right – boom and bust. You have to figure out which side of the cycle you are on, and you present yourself accordingly – and maybe there’s nothing wrong with that. I think part of the whole thing we have to realize, and we often forget during hype cycles especially, is that it takes time for technology to seep into human lives. It is not an instant thing. We think: okay, autonomous car driving – Google is running its show and others are running their shows in Silicon Valley, they’re training these cars, you see some of them being trained on the roads and highways, and you say they will be here in the next two years. The likelihood is they won’t be here in the next 10 to 15 years, because it takes a long time to seep in, for various reasons: rules, regulations, how humans perceive it, and so on. From that perspective, it’s not surprising that something which was there in the 1990s became commonplace by 2010. My feeling is a lot of the stuff that’s happening now will become commonplace five to eight years from now – not necessarily autonomous car driving, but some of the aspects I was talking about. Computer vision, for example: radiologists and cancer specialists may use vision systems to detect which portion of a mole is actually cancerous and which portion is benign. I think some of these things will gradually seep down. Dentists will begin to use them – I mean, it won’t be that the AI makes all the decisions, but it will be a decision-helping system, like a recommendation system: look, the X-rays show this, maybe he has a tooth decay problem here. That’s a decision-helping system.

Suresh Shankar  21:16 

That’s very interesting, because it goes to something, Alok, that I wanted to get to. Last year we actually had a couple of different people on the show, and one of them was a medically trained doctor who had also studied information systems and artificial intelligence, and had an MBA – a very interesting man. He talked about the idea of augmented intelligence, which is what a recommender system is: a system that basically tells the doctor, hey, this could be four or five different things, these are the properties, these are the reasons why – and then the human being applies the judgment that the machine still is not able to. And I guess the question, therefore – there’s a lot of concern around this whole thing – is: will AI replace human beings, or will it augment them? You talk in your book about 40 domains where AI programs perform on par with human beings today. Could you perhaps share some of those?

Alok Aggarwal  22:12 

Yeah. A very interesting one is from Mount Sinai Hospital, a very well-known hospital in New York City, probably in the top 20 in the US if not the world, from both a teaching and a research perspective. In the 2015–16 timeframe they created, using deep learning networks, a particular system called Deep Patient, which they trained on people with various kinds of mental issues and problems. And it actually detects the onset of schizophrenia better than psychiatrists do – and it’s well known that we really do not understand most of these mental diseases well. So it could be a very good example where, again, a psychiatrist may want to take that into account. Again, it’s not a decision-making system; it will be a decision-aid system. So that’s a very useful place, I think. Similarly, autonomous car driving is, in my view, at least a decade away, but at the same time the car making decisions – telling the driver not to go into that lane because we will have an accident, or not to take that particular highway because, probabilistically, it has more accidents, etc. Many of these things, since they are doing so well, the likelihood is they will be used more and more by humans in the commercial world, in real life. Another example is answering questions. Again, the way natural language processing systems are improving, they are actually becoming very good at question answering – not about emotions, again, not about legalese, but if you ask them very specific questions, they are particularly good, almost at the level of humans, sometimes beating them.

Suresh Shankar  24:18 

Because it’s an interesting one. And as you rightly pointed out, the whole book is about that: there’s always hype, then there’s the fall, then there’s the actual catching up with reality, and then suddenly one day you find it’s very big. But you talked about natural language processing, and chatbots are the new thing. Four or five years on, though, I have not met a single chatbot that is able to understand something unless the request is very specific and known – one that is able to adapt to the state of what the question is. Do you see advances being made? On the other hand, I’m also seeing progress in things like Google’s voice assistant and Siri, which seem to be getting better – but they seem to be getting better at speech to text, not necessarily at understanding the context of my question, which is what a good natural language engine should do.

Alok Aggarwal  25:12 

And that is precisely the problem with deep learning networks, right? Deep learning networks by nature understand patterns. They are trained to understand patterns; they are not trained to understand concepts. So, along your lines: you train a deep learning network for autonomous car driving that at a stop sign, the car should stop. Now, there are experiments Carnegie Mellon university professors did, I think three or four years ago, where they put a stop sign not on the ground floor but in a window on the floor above – and the car stopped. This is, again, about context. And again it goes back to what the social scientists and non-computer scientists were saying in the 1960s: that we humans understand context very well, and these networks so far have not taken context into account. That was the case in the 1960s, and that is still the case – our deep learning networks and other algorithms don’t take context into account. Whereas somehow we have context: if Suresh and I met somewhere at a conference, we would remember. Maybe I wouldn’t remember the name, but I’d remember – I talked to this guy, we were talking about these things, and so on. That is very strongly missing in pretty much all algorithms, and I think it will be a hard thing to figure out.
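The stop-sign anecdote can be caricatured in code. The toy “detector” below fires on a stop-sign-like patch wherever it appears in a grid – street level or an upstairs window – because, like the networks Alok describes, it matches a pattern with no notion of context. The grid and template are invented purely for illustration:

```python
# A toy pattern matcher with no concept of context: it reports every
# location where the stop-sign template appears, including a window on
# the upper floor, where a human driver would simply ignore it.

STOP_TEMPLATE = [
    "SSS",
    "SSS",
]

def find_pattern(image, template):
    """Return (row, col) of every exact match of template inside image."""
    th, tw = len(template), len(template[0])
    hits = []
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            if all(image[r + i][c:c + tw] == template[i] for i in range(th)):
                hits.append((r, c))
    return hits

# Rows 0-1 are a building's upper floor; rows 4-5 are street level.
scene = [
    "..SSS.",   # a stop sign printed in an upstairs window
    "..SSS.",
    "......",
    "......",
    "SSS...",   # the real stop sign at the roadside
    "SSS...",
]
print(find_pattern(scene, STOP_TEMPLATE))  # [(0, 2), (4, 0)] - both "signs"
```

A context-aware system would need to reason about where the match sits in the scene before acting on it – which is exactly the part pattern matching alone does not give you.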

Suresh Shankar  26:49 

And I want to ask you about this, because you’re this mix of academic, practitioner and business person. One of the conversations I had with one of my earliest guests on Slaves to the Algo was about why lots of companies that actually have the data – that can, say, understand context, because they have huge amounts of data – do not necessarily apply that data and that pattern-finding to solving a real problem. I asked this in the context of Google search. I said: if Google knows everything about me, which it should, why is it so hard that I have to do eight clicks to get somewhere? And the gentleman, John Kim – he’s the president of Expedia Marketplaces – told me: that’s because the business model pays when you click the ad. They want to show you more ads; they don’t want to just get you straight to the answer. And, you know, no disrespect to Google, but so much of it sometimes doesn’t seem to be about what the machine can do, or what data exists, but about the business model preventing that from happening. Do you see examples of that?

Alok Aggarwal  27:52 

Yeah, I think there is a lot of that. I can say it’s unfortunate, but perhaps that’s part of economic movements – everyone will use it to their best advantage, to sell more. It’s no different between Google and Netflix: the recommender systems show me the same type of movies that I’ve recently seen, and unfortunately many times they are wrong. Facebook is no different. So I think recommendation systems by nature are likely to do that, and the advantage they have is, of course, that they can sell more of their ads or more of their product in the process. They think I like it – I may in fact be tired of it – but they will try and sell me more. So I think that is one of the issues for the world, for society, to figure out. But I don’t think it has stopped, in my view at least, the progress of AI, algos and data in getting to new, more effective solutions.
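The tendency Alok describes – recommenders pushing more of what you just watched – falls straight out of the arithmetic of item-similarity methods. Here is a deliberately tiny sketch; all users, titles and ratings are invented, and real systems are far more elaborate:

```python
# A toy item-similarity recommender. Movies are compared by the ratings
# of users who rated both; a user's unseen movies are scored by their
# similarity to what that user already rated highly.
from math import sqrt

ratings = {                      # user -> {movie: rating out of 5}
    "ana":   {"Heat": 5, "Ronin": 4, "Le Samourai": 5, "Amelie": 1},
    "bo":    {"Heat": 4, "Ronin": 5},
    "carol": {"Amelie": 5, "Chocolat": 4, "Heat": 1},
}

def cosine(a, b):
    """Cosine similarity of two movies over the users who rated both."""
    common = [u for u in ratings if a in ratings[u] and b in ratings[u]]
    if not common:
        return 0.0
    dot = sum(ratings[u][a] * ratings[u][b] for u in common)
    na = sqrt(sum(ratings[u][a] ** 2 for u in common))
    nb = sqrt(sum(ratings[u][b] ** 2 for u in common))
    return dot / (na * nb)

def recommend(user):
    """Pick the unseen movie most similar to the user's rated ones."""
    seen = ratings[user]
    movies = {m for r in ratings.values() for m in r}
    scores = {
        m: sum(cosine(m, s) * r for s, r in seen.items())
        for m in movies if m not in seen
    }
    return max(scores, key=scores.get)

print(recommend("bo"))  # "Le Samourai" - another crime thriller
```

Because similarity is computed from co-ratings, a user who just rated two crime thrillers gets pointed at a third; nothing in the score rewards novelty, which is exactly the “same type of movies” effect Alok mentions.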

Suresh Shankar  29:06 

And as you see this progressing, Alok – again, this is something several of our guests have talked about, and for me personally it is the most fascinating aspect of the development. It’s got nothing to do with the technology, nothing to do with the data or the algorithm. It’s the idea of explainable AI: that when you give me something, you should tell me why you did it. And while a lot of that is about doing so from a regulatory standpoint – that you should be able to explain to a consumer and whoever else – I find that this is almost the basics of something. I mean, if I’m talking to you and I want to convince you of something, I have to give you my reasons why. How do you think the tech industry, the AI industry, is moving towards this whole idea of explainable AI? I feel it will come, and sooner rather than later. Most people tell me I’m living in a fool’s world, but my thinking is: in the ’70s, food labels didn’t have anything; now you won’t buy anything without the label telling you what the ingredients are. So will we soon have that – what are the ingredients of this thing? What’s your take on this?

Alok Aggarwal  30:13 

I don’t want to disappoint you, but I agree with your other guests who said you may be living in a different world. After the hype, in the second chapter, I write that one of the problems is that we don’t understand human thought. And in the fifth chapter, where I talk about the deficiencies or limitations of contemporary AI systems, I say we don’t understand machines either. So we now have a double whammy. Going back to your point that you want to understand why I came to a conclusion: my wife happens to be a medical doctor, and I asked her this – do you explain to your patients what made you think she has breast cancer at stage four, versus stage three, versus stage one? And her answer was no, we almost never do. In fact, doctors around the world have a very interesting habit – call it bad or good, depending on how you want to look at it – of not explaining to their patients how they came up with a particular diagnosis, what the symptoms are, or why the patient should believe them. I think it is a matter of trust that we do it. And my own feeling is – and this is where I go back to the whole issue of AI systems beating humans – take it 10 years from now, and suppose the system is giving better, more accurate results about skin cancer than a pathologist or a cancer specialist. Wouldn’t humans, by nature, say to the doctor: Doctor, did you consult this particular AI system? Is that what it is saying also? Or would they not say: look, this is malpractice, you did not consult the AI system. So which way does it work?

Suresh Shankar  32:10 

But isn’t that what we do when people do a CT scan, or a blood test? They ask for it. Or in fact, the doctor himself says, let me use that and explain it to you. So when I say, Doctor, I want to understand, explain to me why you believe I have stage four versus stage three, and so on.

Alok Aggarwal  32:24

My own feeling is that explainability is very important, for many reasons. We do want to understand causes, because we can probably extrapolate from them. If I understand how a particular algorithm came to a particular conclusion, maybe that understanding would help me in other areas. Maybe if it said, look, this person has this kind of cancer, and it gave me the reasons behind it, I may be able to apply that in a different setting altogether. This is what humans are good at; again, it goes back to imagination and the other areas of human intelligence where AI is nowhere close. Explainable AI is very, very hard right now. It’s almost like trying to figure out what kind of polynomial the system has computed using a multi-layer network, and at least right now, that seems like a very hard thing. People keep saying, oh, I have an Explainable AI system, but when they say that, they are not using anything more than linear systems, to use mathematical terminology; they are not using any polynomials and so on. Because we have almost no understanding of it, they are approximating what the AI system is doing with another system which is more linear and more explainable, and the two may be off from each other. And that’s where
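[Editor’s note: a minimal sketch of the linear-surrogate idea Alok describes, approximating a black-box model with a more explainable linear one. The use of scikit-learn, the synthetic data, and the model choices are illustrative assumptions, not anything from the conversation.]

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for real-world records.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# The "black box": a non-linear model we cannot directly explain.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# The surrogate: a linear model trained to mimic the black box's
# predictions (not the original labels).
surrogate = LogisticRegression().fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")

# The surrogate's coefficients are the "explanation" -- but they
# describe the linear approximation, not the black box itself.
print("feature weights:", np.round(surrogate.coef_[0], 2))
```

The fidelity gap between surrogate and black box is exactly the “they may be off from each other” that Alok points out: the coefficients explain the approximation, not the original model.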

Suresh Shankar  34:01 

it’s a bit like you’re taking a graph with a lot of complex relationships and trying to render it as a very simple set of things, which is hard.

Alok Aggarwal  34:12

And in the whole process, you may lose some accuracy. You may not be as accurate, but at least you can explain it to yourself. So that’s the second thing we discuss: interpretability. It’s really interpreting what the machine is doing, not explaining what the machine is doing.

Suresh Shankar  34:27

And that’s a lovely distinction, actually. Because you used a wonderful word: you said, you know, when you go to the doctor, you trust the doctor, and you don’t necessarily ask. But I learned to trust the doctor over time, or I go because of the hospital, or because of the credentials. And somewhere, interpretability and explainability are key elements of trust. I need a certain source credibility. And I’d like to get your view on this, which is one of the big issues I think the data industry faces, right? Which is: can I trust you a little bit more? Can you tell me a little bit more? Can we have a little bit of a conversation? I mean, you can’t have it with the machine or the algo, but you know what I’m saying: the human being who stands in between, can they help interpret this stuff for us a bit more?

Alok Aggarwal  35:12 

Absolutely. So in that particular chapter, I actually talk a lot about this whole issue, and my view is severalfold. One is that trust takes time to build, whether it’s humans trusting other humans or anything else. We see it in our daily life: to trust another friend, you have to have some longer relationship. The same will be true of machines. Humans will begin to trust machines if they are consistently better than humans. The other aspect is certification. You trust a doctor if the person is, let’s say, an MD or a fellow from Harvard. Just the stamp of Harvard probably makes you trust the person more than if the person is from some college in Singapore or in India, right? So I think certifications,

Suresh Shankar  36:06 

well, we believe our system in Singapore is very good!

Alok Aggarwal  36:08

I’m sure your system is very good, right? And that is again another aspect: who is trusting whom? So again, the context comes into the picture, and you nailed it, because I was going to say, who is trusting whom, and under what certification? Maybe a person sitting in India who doesn’t know Harvard at all may trust somebody from the best college in India, or the best college in Singapore, much more than somebody from Harvard. So I think certifications matter. And my own view is that a certification industry will come out of this. Just like another industry that will come out is one that says, hey, this doesn’t have biases, where by biases we mean looking at particular characteristics, be it ethnicity, race, age, and so on, similarly, a new industry is likely to come up which would be a certification industry of sorts for machine learning algorithms, just as we have certifications for accountants, professional engineers, doctors, and so on.

Suresh Shankar  37:08 

When something like this comes in, I presume a lot of it will happen in the United States. Are there industry bodies or companies working towards this, or do you think it is something that will just come when it happens?

Alok Aggarwal  37:20 

I think that’s why I’m writing “near future”: my own view is that it will evolve in the near future, by 2049. Another area which I think will emerge, which I don’t see yet, is professional liability insurance, just like medical doctors in the US have malpractice insurance. If I, as a medical doctor, do something really wrong, then I can be sued criminally, but also financially. Product insurance is of a similar kind: if a car goes berserk, you can sue the carmaker. The same kinds of insurance would come out for AI systems as they begin to be used, and that has not happened either. So in my book, I say, look, some of this is obviously futuristic, and it may give ideas to people to set up companies, which is all to the good for the world and for humans. Because to me, ultimately, AI is all about humans, and we have to make it work for humans.

Suresh Shankar  38:16 

Absolutely. Look, it’s been fascinating; I have at least 20 more questions, and maybe we’ll have you back on the show once you get the book out into the marketplace, to talk more about it. But I do have a couple of ending questions. One of the things happening in the world is this whole rise of machines and sensors, right? IoT, smart cities. When you have machine-to-machine interaction, because now we don’t have the human being in between, is there actually less bias, or is the bias magnified? Because now there’s nothing even to control or add context between the signals that are passing between machines and the decisions that are being taken by machines.

Alok Aggarwal  38:55 

Yes. I mean, this is an area of research right now; people in various colleges and various research institutions are working on it. My personal belief is that it is likely to be compounded. The reason is that these machines, unfortunately, are trained on data which itself has bias, because after all, humans are training them, and we are biased by nature. That’s one part, the data having bias. But there is another bias that, unfortunately, the lack of explainability creates, and that’s part of the problem with unexplainable AI: there may be a pattern that the machine finds which we do not even know about, right? Suppose, without realizing it, and again going back to healthcare, we are sending a lot of mammograms from a few hospitals, let’s say 20 hospitals, and in one hospital there are more mammograms with cancer. The machine may learn that this particular hospital has more cancer patients by nature, which may not be the case at all, right? So it may become biased about that particular hospital. Much worse would be sepsis. Hospitals try very hard to avoid sepsis, but unfortunately sepsis happens, and the machine could get biased: hey, don’t go to this hospital because it has a lot of sepsis, whereas the underlying cause could be very different. And it has suddenly become biased without us realizing it.
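[Editor’s note: a toy sketch of the sampling bias Alok describes, where a model learns the data source itself as a predictor. The numbers, the hospital setup, and the use of scikit-learn are invented for illustration.]

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
# Each record comes from one of 20 hospitals.
hospital = rng.integers(0, 20, n)
# Hospital 7 happens to contribute mostly positive cases (80%);
# every other hospital contributes ~10% positives.
cancer = np.where(hospital == 7,
                  rng.random(n) < 0.8,
                  rng.random(n) < 0.1).astype(int)

# Train on the hospital ID alone -- no medical information at all.
model = DecisionTreeClassifier().fit(hospital.reshape(-1, 1), cancer)

# The model now "diagnoses" cancer from which hospital a scan came from,
# a spurious pattern created entirely by the sampling, not by medicine.
p7 = model.predict_proba([[7]])[0, 1]
p3 = model.predict_proba([[3]])[0, 1]
print("P(cancer | hospital 7):", p7)
print("P(cancer | hospital 3):", p3)
```

The model’s confident prediction for hospital 7 is exactly the kind of hidden bias Alok warns about: the pattern is real in the training data but says nothing about the underlying cause.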

Suresh Shankar  40:28 

It’s such a fascinating thing. And look, I think there’s going to be a second conversation, probably going a little deeper into some of these areas, closer to when you’re ready to launch the book. But I do have one question for you, which I was left with when I read the book over Christmas and New Year. In the 50s and 60s, you had Minsky, you had Turing, you had McCarthy. You haven’t covered this in the book, so feel free not to answer it. As you stand today, 70 years on, having written a book that goes out to 2049, do you see people of that stature, people who are trying to say, this is where the future is going to be? Or have we all somehow, in this era of corporatization, made it all about people sitting in large companies and building things that make money for them? Do you see those visionaries again?

Alok Aggarwal  41:22 

Absolutely, I think both of them go hand in hand. That is how I started the first chapter, “Structure of AI Revolutions”, where I talk about AI revolutions, plural, not revolution. One is a scientific revolution; one is an economic revolution. The economic revolution is very similar to the industrial revolutions that have taken place. For the scientific revolution, I give an example: there is a very, very interesting book by Thomas Kuhn from 1962, The Structure of Scientific Revolutions, where he talks about how you have Copernicus and Newton giving the first paradigm, and then for 200 years the same paradigm continued, until we had a lot of anomalies we could not explain, and eventually crisis after crisis in the late 1800s. By the way, the Industrial Revolution was going on in full force simultaneously, the economic revolution of steam engines and then mass production and so on. And yet, if you look from the scientific perspective, we had a massive vacuum, because suddenly Newtonian mechanics, Newtonian physics, did not work anymore. And then you had an Einstein. I think that’s why McCarthy came up with his “1.7 Einsteins” estimate. Kuhn spent a fair bit of time at Berkeley and at MIT, so I wouldn’t be surprised if McCarthy had some discussions with Kuhn, because otherwise, coming up with this notion of 1.7 Einsteins and 0.3 Manhattan Projects looks really, really deep to me. He might have had those discussions, maybe not. So I do think that will happen. Some of the very well-known people in the deep learning field, like Geoff Hinton from the University of Toronto, already believe that we have pretty much achieved whatever we could achieve with deep learning networks and we need to move forward.
So I think the same anomalies and crises are going to appear with respect to all the algorithms that we have currently. And gradually, when those anomalies happen, sooner or later there are people who will try to create a new paradigm.

Suresh Shankar  43:32 

That’s wonderful. Okay, I’m going to make a suggestion to you. You know, you mentioned the gentleman from the University of Toronto. I think you should predict two or three of these visionaries who are going to be the next Minskys and Turings.

Alok Aggarwal  43:46 

I would love to predict that, but I don’t think I have that kind of vision, to predict who will be the next visionaries. I mean, let’s not forget: Einstein, until 1905, was a patent clerk in Switzerland, and nobody had heard of him until he wrote his theory of relativity. So I think that’s probably the hardest prediction. That’s harder than predicting the stock market, in my view.

Suresh Shankar  44:17 

Thank you. Look, what a wonderful way to end this. I think this has been such a fascinating conversation. The book is fantastic in that you’re talking about a hundred years, the past and the present, and about all these overlapping waves of hype and bust, the next wave of hype, all of them going on in parallel. And I simply love the point you make that we don’t know when it will happen, whether it’s five years or 500.

Alok Aggarwal  44:41 

That’s the point McCarthy made, and actually that made me write the first chapter itself, because there are two simultaneous revolutions going on, right? There is an economic revolution, as you said, the corporate world. We saw that with James Watt coming up with the Watt engine, the steam engine, and then everything went corporate; they couldn’t care less about all the anomalies that were happening in the physics world, right? And then there was the physics world, which was completely stalled. They said, okay, we have learned whatever we could have learned; the rest is up to God, and we don’t think we will learn anything more.

Suresh Shankar  45:16 

So I think it’s been really fascinating to talk about, you know, the fact that there is Explainable AI and there is interpretable AI, the linear and the polynomial, so many wonderful topics that we’ve covered today. In spite of everything else, I think you and I are fundamentally hopeful people about the future of AI. Even so, I think the danger is that we could all end up being slaves to the algo. You in particular strike me as someone who is not a slave to the algo but is aiming to be its master. Thank you very much, Alok, for being on the show. I’m sure we will have you back on it again. It’s really been a privilege, and we are all looking forward to that book release coming out soon.

Alok Aggarwal  46:03

Thank you so much for having me again. I’m really delighted with this conversation, and we will definitely continue it.

Suresh Shankar  46:09 

And to my viewers and listeners, I hope you enjoyed the show with Dr. Alok Aggarwal, CEO of Scry Analytics, author, a man with over 40 years of experience in AI, writing about a hundred years of artificial intelligence. Slaves to the Algo is available on YouTube, Spotify, Apple Podcasts and Google Podcasts. We release a new episode every week. If you enjoyed this episode, don’t forget to rate, share and subscribe. Stay safe, the age of COVID is not beyond us yet, and stay relevant in the age of AI. Alok, thank you very much. Thank you so much for taking the time.

Alok Aggarwal  46:55 

Sure. My pleasure. Always.