Building the Enterprise of Tomorrow with CJ Meadows

How does an enterprise strike a balance between managing human and tech intelligence? Dr. CJ Meadows believes that technology is our new colleague, and we need to treat it as such. 

Our most recent guest on Slaves to the Algo says, “We don’t have to have jobs that don’t fit the person. So, do you change the job, or do you change the person? How about just having a person work on the tasks they are appropriate for? Then the org chart looks like a big network, and it’s not static. It changes in real time, every second, as people get started on new things.” 

Dr. CJ Meadows is a thought leader, coach, author, speaker, and entrepreneur on tomorrow’s innovation who has led and advised companies and leaders across the globe. She co-founded and currently leads i2e – The Innovation & Entrepreneurship Center at SP Jain School of Global Management, a Forbes Top-20 International Business School.  She has over 20 years’ experience in Asia, Europe, and North America and her current focus is on Leadership, Creativity, and Radical Innovation. She has published and spoken extensively on innovation, design thinking, entrepreneurship and globalization. 

Dive into an interesting talk on the future of the workforce, design thinking, humanoid robots, and more! Check out the full conversation between Suresh and Dr. CJ Meadows below. 

About Slaves to the Algo  

Whether we know it or not, like it or not, our lives have been taken over by algorithms. Join two-time entrepreneur and AI evangelist Suresh Shankar, as he talks to leading experts in various fields to understand how they are using or being used by algorithms in their personal and professional lives. Each episode highlights how businesses can leverage the power of data in their strategy to stay relevant in this new age of AI. Slaves to the Algo is brought to you by Crayon Data, a Singapore-based AI and big-data startup.  

Suresh Shankar is the founder and CEO of Crayon Data, a leading AI and big data start-up based in Singapore. Crayon Data’s flagship platform, maya.ai, is the AI platform powering the age of relevance.  

How to listen to our podcast  

Apple Podcasts  
Spotify  

Google Podcasts  
YouTube  

Full transcript of the episode below

Suresh Shankar  0:00   

Hello viewers and listeners. Welcome to another episode of Slaves to the Algo. I’m Suresh Shankar, founder and CEO of Crayon Data, an AI and big data company, a podcaster, and host of Slaves to the Algo. Slaves to the Algo is my attempt to demystify the age of the algorithm, sharing learnings from myself and other professionals on how they are using or being used by data and algorithms in both their personal and professional lives. I don’t attempt to look at the future as either dystopian or utopian; Slaves to the Algo merely seeks to bring alive the use of data and algorithms into our conscious thinking selves. One of the things I’ve been very fascinated with at Crayon is the life stories and professional achievements of many female leaders who choose to challenge or break bias. The tech industry has been notoriously challenged when it comes to women’s representation. A 2020 study found that women make up only 28.8% of the tech workforce. That’s a steady increase, no doubt, but it’s not fast enough. At this pace, it could take a few more decades before women actually get to equal representation in the industry, which is no less than they deserve. And while the industry is working on making this a reality, what’s also very clear is that we need female role models who are already blazing a path for other women in tech, and for men, too. Which is why at this point we’re doing a series of mini episodes featuring women leaders in tech, those who are reinventing the technology landscape as we speak. Yes, I know March is Women’s Month, but shouldn’t it be every day, every month? So I’m particularly thrilled to welcome to the show today one of the Top Women in Tech in Asia, Dr. CJ Meadows. CJ is a thought leader, a coach, an author, a speaker. I don’t know how she packs so much into a day. But she is a leader on innovation, and she has led and advised companies and leaders around the world. 
She co-founded and currently leads the Innovation & Entrepreneurship Center at the SP Jain School of Global Management in Singapore. She has decades of experience in Asia, Europe, and North America, across radical innovation, creativity, design thinking, leadership, and globalization; she’s really covered a lot in her extensive career. And she has stayed very, very abreast of how technology is aiding this revolution in leadership. Welcome to the show, CJ. 

CJ Meadows  2:43   

Thank you so much for letting me be here. 

Suresh Shankar  2:46   

CJ, I always like to start by asking a slightly more personal question. We all tend to look at data and algorithms from a professional perspective most of the time, but we’re also affected as individuals by these developments. And beyond the usual, you know, “Amazon makes a great recommendation,” can you share some examples of ways that you’ve been impacted, personally or professionally, positively or negatively, by data and algorithms? 

CJ Meadows  3:15   

Actually, I do have one. And I’m sorry, it’s not mine; it’s a friend of mine in Europe, in France, as a matter of fact. She had just bought a new car. So, so excited, she decided to take her two kids on a road trip. They were going to see a wildlife show and go see some sights and all kinds of good stuff. It was the early days of booking engines, so instead of talking to a travel agent, she went to a booking engine. She typed in where she wanted to go and various criteria, and she didn’t like what she saw. Expensive, and not very nice. So she kind of scrolled down, and then, oh, I like that one. So she booked it. Now they get in the car, they have their drinks in hand, they’re excited to go, and she types the address into the GPS. And they go and go and go. And then they go some more, and some more. And she thought, gee, this is taking awfully long. And they kept going. And they got to a funny little place, she told me, which looked a lot like a shopping mall. She thought, oh, maybe everybody’s routed through the shopping area or something, but there were no signs on any of the buildings. Some folks were routed over there, but most people were going this way, and they waved her on. She went on, and they found the place. It was lovely, they had a great time, packed up, and decided to go for their big day of wildlife shows and historic sites. To get there, the GPS took her back through the funny little area that looked like a shopping complex. And again, some people were routed over there, and then they look in the car, see a woman and kids, and wave her on. They had their great day, and she went home so happy. But it bothered her: what was that place? Then she Google-mapped it, and found that she had gone to Andorra, which is actually a foreign country. 
She and her kids had gone in and gone out without any passports or right to be there whatsoever. So what does this tell us? One, we need to have really good user interfaces to share with the humans what kinds of decisions the AI is making, and why. Number two, anytime you get something from an AI, you have to ask: did it really answer my question? As a person who loves to shop online, I can tell you for sure, I plug in five criteria that I want met by this thing that I want to buy, and then I look through everything the AI engine gave me, and none of it meets all five, because nobody has designed it yet. 

Suresh Shankar  6:19   

So we are trying to work on that at Crayon, but that’s another story. It’s so fascinating, what you say about the human interface, and the fact that we need to ask of the AI: what exactly are we seeking from it? Far too often, I think, human beings have gone into this default mode of just saying, hey, listen, something comes out, and I’m gonna trust it blindly. Right? Whether it’s news and fake news, or recommendations in online shopping or shows. And there’s one behavior that I think this is triggering: while there’s no doubt that all this data and AI has made life more convenient, for me it’s also reduced my thinking, in a way. Would you agree? 

CJ Meadows  7:00   

Absolutely. I myself plug things into the GPS and just go. But then again, that’s a great luxury for me, because I used to be the person who did all the driving and planned all the trips and worked out all the maps, and then I had to fix the car. So the big question is, what are you willing to give up for that luxury? And what is that luxury enabling you to do, or helping you give up? The other question that I think is important to follow on to the story: I was sitting next to a world-leading expert in AI, and I can’t remember his name. If I had an AI chip, I would know. But he said, look, there are two things that AI can’t say to you, and this can cause big problems. Number one, it can’t say no. And number two, it can’t say, I don’t know. And I was astounded that we’ve built all these incredibly complex machines that cannot say these two very simple but incredibly important things. And you and I were just talking about how we get involved in this and that and the other; it’s too much stuff to do. It’s probably our own bias that has worked its way into the machine. And like it or not, our own bias will work its way into our culture, our machines, and our children. So we’ve got to be careful. 

Suresh Shankar  8:32   

You know, that is absolutely fascinating. And I probably need to do an episode on how this whole bias gets into AI. But it is also so true, because I was wishing, as you were talking: why is there not something in my calendar application that tells me, “Suresh, stop! Too much!”? 

It just tells me I have a conflict, but I go ahead and schedule the conflict and put in one more meeting anyway. Why can’t something linked to my wellness say, “I think you’re doing too much there”? That would be useful. 

CJ Meadows  9:06   

Funny enough, I’ve got a friend working on that. And given what you just said earlier, it’s a woman tech CEO. She’s doing it, and this is important. 

Suresh Shankar  9:21   

I think it’s because women tend to have to optimize so many more things, and they’re used to doing that. So the bias they’re bringing to this is a positive one: not, like a man, “I want to pack on things and get that done,” but “how do I efficiently allocate my time to the right things?” 

CJ Meadows  9:38   

Although if you look at people pleasers, a great many of them are women. I’m sorry, I don’t have a statistic. But don’t be biased now; everybody does that. 

I think the big thing is that successful people do that, and because it helped them be successful, it’s reinforced. So how do we break our own cycles of doing, thinking, and ultimately being? And how do we help machines learn to break their own incorrect cycles? We’ve talked about machine learning; is there such a thing as machine unlearning? 

Suresh Shankar  10:18   

That is, again, such a fascinating thing, because I think people are talking about machines forgetting, but they’re talking about forgetting in the privacy sense of the word: don’t track me anymore, take my data out of your system, and the EU led the way in that. But the unlearning, and another facet of this unlearning, is the fact that for machines or algorithms to actually learn more, they need a negative set, if you will. Like, you know, when you’re doing lending in banking, you want to say, I have people who have bad loans; otherwise, I can’t figure out who’s a good borrower, right? They need people who dislike certain things, they need people who say, I don’t want something, as well, for the machine to work. And far too often, I think, we get into this loop of, hey, this is what everybody is seeing, and we get more of that, and a few hours have gone by and you’re still stuck in that loop. But anyway, coming back, CJ, you’re an expert in not just the tech side of business, but design. And one of the challenges that I think companies face, and you’ve actually written and talked a lot about the enterprise of tomorrow, is how does the organization balance what is traditionally prized by organizations, human intelligence, with tech intelligence, or machine intelligence, which is the new age led by data? There’s obviously good in both. But how does an enterprise strike this balance? What does an enterprise that strikes this balance look like? 

CJ Meadows  11:55   

Well, I think I would first question whether organizations actually do prize intelligence. Many, many organizations out there prize obedience. And organizations didn’t actually start in the knowledge age. They began with the military, which prized a human being’s back and physical ability to do things. Then we moved into the industrial age, and organizations grew quite large; we had to figure out how to do that. And it prized, again, people physically doing things, but now it was physically taking care of machines. Then we moved into the knowledge age. And in some regard, we then no longer valued human intelligence, because computers could compute data en masse, far more correctly and quickly than we could. But we still needed the humans to do something with the data. Okay, now we’re moving into the creative age, where machines have taken over a lot of physical labor, and they’ve taken over a lot of intellectual labor. But there still are shortcomings in what they can do creatively, and when there is no past data, you need a messy neural net, which we’ve never really been able to replicate, that collects the dots and connects them in novel ways. Now, yes, some AI can do that; we have writing bots, and we have music bots, composers. And as we move forward, I can share some truly wacky stuff with you. But what does the organization look like in this new age, where we have a new tool that takes over some of our labor? We’ve generated hierarchical organizations that I believe will meld into networks, far more like fungible networks inside and outside, with a lot of gig workers who miraculously make a living, and both internal and external marketplaces that are AI-enabled to help not just find humans to do jobs, but find humans to do tasks, and coordinate that, and tech to do tasks, because the tech is our new colleague, and we need to treat it as such. And jobs are broken down into little bits. 
We don’t have to have jobs that don’t fit the person. So do you change the job, or do you change the person? How about just having a person work on the tasks they are appropriate for, and manage everything, you know? Then the org chart looks like a big network and a big mess, and it’s not static. It changes in real time, every second, as people finish things and start things. 

Suresh Shankar  15:14   

You’re talking about how it looks like a really big mess. And at that point, the question is, isn’t that really just a description of the human brain? Because it is a big mess, but somehow we manage to pull things out at just the right time; it knows different kinds of things. And the military analogy strikes me as something that’s also very fascinating, because we’re used to running companies like that. We are unable to appreciate not just the human and the tech, but also the fact that maybe the person actually sitting in the field is the person who knows most, right? I had a person on the show, I think in the first season, and he said the person with the data should always win the argument. And far too often in organizations, it’s the person with the experience who wins the argument, and you talked about obedience. Is it just a general bias that we think younger, data- and tech-savvy people tend to be less biased and more willing to use data? Or is it the older people saying, you know, “I know about this, I’ve been here 20 years, I’ve been doing it”? Is that a bias of my own? Is that a bias that you see in companies? 

CJ Meadows  16:32   

It’s a bias I see everywhere. You know, I was talking to a tech CEO, another one who’s a woman. And she said, until she got into tech, she never realized: the moment she opens her mouth, she’s dismissed in very subtle but pervasive ways. And she noticed a pattern of difference. When somebody asks her a direct question, she gives an answer that she’s hoping will fulfill that person’s need for information, inspiration, whatever they want. And then in tech conferences, she would observe that a question goes to a man, and he’ll come back with whatever the hell he wants to say, maybe related to the question, maybe not. So she learned that sometimes she’s going to have to push her own agenda, and sometimes fulfill the other person’s agenda. But when we’re in an age of creativity, where we’re trying to understand other people’s needs and design solutions for them, you need that empathy. So we’re going to see in organizations a lot more of the network style, which is a feminine style, and a lot more empathy, which is key not only for design and product and service development; CCL did marvelous studies showing that leaders with more empathy get better business results. 

Suresh Shankar  18:07   

It’s correlated, and I’ve heard about this. And I wondered, right, I mean, if you take computer networks or internet networks, the most resilient ones are the most connected ones, because they don’t break down; there is no single point of failure. Traditional organizations have multiple single points of failure. I think we have to get people to think of the organization as a network, because the collective wisdom in general far exceeds an individual’s personal experience. And I think this is a bias that I don’t know how many decades it will take to undo, and you’re an expert on this, so I should ask you: how long do you think it’ll take before we get this truly networked organization, where knowledge drives the actual decision-making rather than any other form of hierarchy? 

CJ Meadows  19:06   

To be honest, I do not trust myself to give you an answer. Because what we’re doing right now, this virtual interactive thing, this was my doctoral thesis 25 years ago. I thought it was going to happen back then. But, you know, there are two things I want to share. One is that entrepreneurs and innovators love opportunity and the excitement of the new, and they’ll go out and do that thing. Most people, especially organizations set up for hierarchical obedience, do not. They need a burning platform. And before we could all do virtual work, we had to have a pandemic. The other thing, and I love your mug, by the way, the key thing in what you just said is wisdom. So an organization at its best operates as a group brain, as a design thinking team does, as any team does, and an organization is a team of teams. 

So then we get into Carl Jung and collective consciousness and all kinds of stuff, all this philosophy stuff, and cognition and everything, what used to be your career kiss of death. Now it’s super important. And these ethicists and philosophers are being hired in to take care of exactly this problem. 

Suresh Shankar  20:29   

Can you give us some examples? People are actually hiring philosophers to help them with this? 

CJ Meadows  20:36   

Google. 

Google has made real strides from keyword search and search engine optimization on keywords into semantic search: looking at the context of a query, trying to understand why this person wants to ask this question, what they really want to know, and, given the webpages and information it could give them, what is the best fit semantically, not just in terms of brute-force keywords. If you also look back at the origin of Google, the reason Google and Yahoo were so dramatically different is very similar to this issue. Yahoo and a lot of other search engines were keyword-driven, and it was all like a big hierarchical index. Google was started by PhDs who knew academic citation rankings. For people who live and die by publish-or-perish, or thought leadership, the greater impact your thoughts have, the more likely you are to get tenure and be promoted and hit bonuses, yada, yada, yada. So academic citation rankings would rank information based on how many people are actually using it, what kind of impact, what kind of context, yada, yada. This was a far more useful approach than the other one. But what you have to do is get pretty deep into how things actually work, to understand why they work, how to game them and work them, and how to fix them if they’re kind of broken. 

Suresh Shankar  22:32   

You know, I love that about Google, the way they talked about using PageRank, based on the academic citation ranking model. But I think it’s a company that has lost its way completely. You run a Google search, and you get seven ads. A classic case where the business model has taken over all ethical approaches to finding information. And that’s probably a topic in itself, and I don’t want to go there, because there is so much that Google could be doing. I always ask the question: if Google knows so much, why do they have to give me so many results? Why not the top five results? Why not literally give me the answer? But increasingly, I find that with a lot of AI companies, and this goes back to the ethics of this, the business model is overtaking the actual use of the data for doing something useful for you. And in other cases, take Google Maps and other things, I can see where that is leading. But I’m going to come back to this idea. You mentioned something earlier about looking at a set of tasks, some done better by machines and some better by humans. But where do you draw the line? And who draws that line? Because, for example, in a personal conversation with a senior business leader, I said, this is the data, this is what the model shows. And he says, no, but I’ve been doing this, and what will happen to my team? And I say, listen, let the machine do what the machine is good at doing. Because machines learn backwards, as one of the guests on my show said, which frees you as a human being to imagine forwards, because the machine can’t imagine forward. So where do you draw this line? How do you decide what is better done by a machine and what is better done by a human in an organization? 

CJ Meadows  24:17   

Well, I think one of the things you’ve got to realize, and you already have realized, is that you are advising the person who makes that decision. So we’ve already established who our leaders are, and who’s going to make that decision. It does come down to data, information, knowledge, judgment, and wisdom, and you had mentioned the wisdom aspect earlier. The machines are really good at data. They’re good at information. AI is actually pretty good at knowledge. But that’s where we start to get into human territory: judgment and ultimately wisdom, where we don’t trust anything but our own heads, not our tech, to make those decisions. Now, if you look at videos like Humans Need Not Apply, which shows us that 45% of jobs could be done by machines, then go back to an analogous example, the ATM. ATMs were supposed to replace tellers. And how many tellers do we have now? More than ever. Why? Because it’s not the same job. Tellers now are people who help you solve your problems and help you connect with the company in ways that are useful to you, the customer. So what we’re going to see, I believe, is not that the tech is going to take over our jobs; it will take over tasks. And not only do we still need the humans to collaborate with the technology, but the humans can now also be freed up to find new needs, design new solutions, and implement them. So why hold people back? 

Suresh Shankar  26:14   

That is absolutely true. It’s going to take a bit of a mental switch for people to learn to trust the data and the algorithm, and a second mental switch, if you will, which is probably an even bigger step, to say, this is not a threat to me, it can be an aid to me. Don’t you think? 

CJ Meadows  26:37   

Absolutely. See, organizations may find that AI, machine learning, bots, and so forth make an impact very quickly. And leaders are actually concerned about this. They are mapping today’s workforce to tomorrow’s workforce, funny enough, using computers, and making the roadmap to get there, because they realize that just firing all of today’s workers and trying to hire in new people isn’t going to work. It’s expensive, you’ve got to recreate your culture, you’ve got all new people, and then society is filled with outcasts. I mean, come on, not a sensible decision. Bad for business, bad for society. 

Suresh Shankar  27:23   

Absolutely. And coming back to this idea of AI ethics, and philosophers, if you will: for me, this is a fascinating thing, because it’s obvious that organizations of all kinds hold a lot of your data, they’re able to buy or access other data, and they’re able to put these together to form patterns. But really speaking, you typically have HR and people functions, but then you have a values person in a company who says, these are the values that I will hold dear, and I will use them to shape policy. And an AI ethicist’s job seems to be: I will shape the policies for the way in which data and AI can be used, to make the lives of various stakeholders, whether employees or consumers, better. So do you think, or are you seeing examples of, companies saying, we all know the Chief Data Officer and the Chief Digital Officer, but are we going to have a chief AI ethics officer of some kind? 

CJ Meadows  28:24   

Oh, a lot of companies do this! Yeah, a lot of companies do. And, you know, what happens with bias is that you just set it: data looks backward; people imagine forward. Bias is a thing of looking backward. So we need people to actively be looking forward, combating the bias by exploring the new and by deeply understanding. I mean, one of the biggest biases I haven’t even mentioned yet: why is a gray-haired guy talking to a gray-haired woman about advanced technologies? Aren’t we dinosaurs? What the hell would we know about it? You wouldn’t believe how many times somebody sees me with my phone when it’s just been a little slow, and they come over and say, oh, may I help you, ma’am? I’m like, no, bloody get away. I’ve been programming these things since the Defense Department in 1987. Come on. So I help them to explore something new. 

Suresh Shankar  29:31   

But you’re right, and I think so much of that is a bias. And I’m going to go back to a different point about bias, beyond the one you’re making, which is that people tend to look at somebody, whether it’s a woman, an older person, a younger person, or a person who dresses badly in a hoodie, and assume things about what they know about tech. The question I have is that bias, and breaking the bias, can be both good and bad. It can be good because sometimes we tend to be biased and overweight our experiences and not look at the data, so the data can help correct the bias. But equally, data can foster a bias, right? Like a lot of lending decisions, because they’re based on past lending, or purchase decisions, because they’re based on past purchase behavior. So actually, even the issue of what is a bias is a pretty deep topic in itself. 

CJ Meadows  30:26   

Absolutely. You know, one of the things that banks struggle with, and this happened long ago in the US, is that they would make decisions partly based on your postal code, but what do you know, that’s highly correlated with your race. So what do we do about that? And how do we find new ways to predict who’s a good person to lend to and who’s not? One of the ways we’ve come up with is social media and tracking phone data. One of the things you do in emerging economies, where people don’t have credit histories yet and you want to get in at the base of the market, is track phone calls and phone activity: does this person appear to have regular sleeping patterns? You might track communication with gas and propane sellers; if somebody is ordering gas or propane, they probably are cooking at home. You track what they do on social media and see how things look, and try to correlate some of the indicators you think there might be with how people behave, and then make future predictions based on that. But, again, the human is still in there, deciding what data to track, where to point the algorithm. Either we’ve got our hands in the algorithm, or we have our hands in the data, setting the data set for the algorithm and machine learning to go through. There’s no way to get away from bias altogether. But we do have at least an awareness to try. That’s the thing that saves us. 

Suresh Shankar  32:17   

And I had Ian Miles on the show a couple of seasons ago, you might know him, and he talked about how we’re going to get Explainable AI. And the fascinating example we talked about, CJ, was the idea that in the 70s, which not a lot of our listeners might be aware of, there used to be no labels on food or on drugs. Even today, you may not always read the label, but you wouldn’t buy something that didn’t carry it. And he says every piece of code is going to come with a label, eventually, that says how the AI picked this up and arrived at that position. Do you see that day coming closer? Or do you think it’s still far in the future? 

CJ Meadows  32:56   

I actually have already seen someone talk about blockchain for fruit. I swear to God this is true. Putting labels on every individual piece of fruit so that you know it’s sustainable, responsibly sourced fruit, and you can see who picked your fruit and who grew your fruit. So we can already do things like that. But the question is, should we? Is it useful? Are we filling our data coffers with junk? And that comes back to your idea of forgetting. If we’re creating a big mess out of data, the way we jettison so much junk into space, at some point is it all going to fill up? We need to clean our act up. 

Suresh Shankar  33:51   

Absolutely, you can already see that in the rising cost of practically everything you do in tech today. But I’m going to move on from the tech and the data and the bias aspect to another area of yours that is fascinating. You’re a pioneer in design thinking, using the right brain to reimagine things from a user perspective. And typically, if you look at the way people are using data and AI, they’re also talking about using data and AI to create better journeys. As you sit in between these two worlds, my question to you is: do you see them complementing or conflicting with each other? And how? 

CJ Meadows  34:28   

Actually, I’m glad you brought up the brain. Because it isn’t just the right brain you use. What you want is a whole-brain team: the right brain and the left brain, so you get creative and logical people. And if you’re all heart, like me, you’re useful too, because you make it a whole-brain team. You could be the glue. And then the bottom part of our brains is the seat of emotion and action. So you want all of these different styles. One of the things I forgot to put in as well is the tech brain, so that we have truly diverse design thinking teams. Now, what the tech can currently do is point out anomalies and extremes, things that we can investigate. But it isn’t necessarily good at investigating. It hasn’t progressed from the days of Pampers and beer. Do you know that old story? Yeah. So we still need the humans to go and investigate, ask the questions, and think forward. But the AI team member can be a fantastic resource for identifying lead users, and then we go find out how that happens, when, and why. The extremes, the people who really hate your product or really love your product, can teach you more about your product than you ever dreamed possible. And they do it quickly. So your AI can help you find them. But for analogies, analogous situations for fresh ideas, like emergency rooms and the Formula One pit crew, that’s still a human thinking kind of thing. 

Suresh Shankar  36:22   

And could you explain that emergency room and Formula One story? It’s fascinating. 

CJ Meadows  36:28   

It’s a classic, and one of my favorite design thinking stories. And this is also another thing: AI is good with statistics, we’re good with stories, and we’ve got to imagine together. So an emergency room wanted to get better at what they do, and they hired a design team that said, well, what do you mean by better? Well, our service has to be fast, or people could die. And it damn well better be right, or people could die. So what other industry can we look at for fresh new ideas, where it’s got to be fast and it’s got to be right? Huh? Formula One pit crews. So the hospital staff and the design team went to Formula One, not just for the races but to watch the pit crew, and they kept nudging each other: See that? What do they do? Why don’t we do that? And they came back with so many great ideas, put them into the hospital, and saved so many more lives. They didn’t stop there. They invited the pit crew to come to the emergency room and said, well, here’s what we do, how would you do it? So it’s these kinds of lateral connections that can help you to create radical new value. 

Suresh Shankar  37:52   

That is such a fantastic thing. I didn’t know that. I can see the analogy now that you’ve pointed it out, but it’s not very obvious. I keep using the emergency room situation a lot in my own company from a different perspective, saying the data you need to solve a problem like that is way different from the data you use to keep people out of the emergency room in the first place, right, which is how you prevent something from happening. But when you do these design workshops, you have these very left-brain, detail-oriented technology people. And then, and I’m not imagining there’s a bias here in what I’m saying, you have these systems thinkers, design thinkers, who process information very differently. How do these two kinds of people even interact and find a language that’s common between them? 

CJ Meadows  38:50   

I’m so glad you said that. There are actually research studies on this. When people are different, they first, exactly as you said, come down to establishing a language so they can try to understand each other. By doing that, they come to a deeper understanding and surface their assumptions. Whereas if you have a unified, monolithic team, they don’t do that. Now, coming to a deeper understanding and learning how to interact with each other enables them to become a more productive problem-solving team. The reason we say diversity and inclusion is because you need the inclusion part. So whether it’s a design thinking team, an organization, or the mind of an individual leader, you need diversity. And you need the ability to co-create and collaborate, and that is missing from a lot of situations. 

Suresh Shankar  39:55   

And I’m going to go on from that. We talked about how you bring this together, and I know that you believe in Fusioneering; you’re a Fusioneer yourself. You’ve talked about how, just like nuclear fusion releases energy, technology, business, and the arts can all come together to release massive energy. Can you share with us some examples from the workshops you’ve done of how the fusion of what I call data and imagination has led to an explosive amount of energy, explosive value creation? 

CJ Meadows  40:26   

Oh, one of my favorite examples is Dr. Karen Stevenson. Now, when Karen went to school, she went to a liberal arts college and decided to study art and quantum chemistry. And her friends said to her, what kind of job are you going to get with that? And she said, it’s a liberal arts college, can’t I study what I want? Yeah, fine, fine, fine. Now, as part of her art training, one of the things they did as a test was to have a student examine the brush strokes of a painting, and they wanted to know the artist and the year. She got 100% on that exam. Lucky lady. She said, I got lucky, this is good. And then they did the same exam the following year, and the year after. How did it go? 100%, every time. She’s a master pattern matcher; she can see patterns. Now, after school, she didn’t take a job as an artist. She took a job in a lab as a quantum chemist. And she was sitting on the mezzanine one day and looked down. There were all these people moving around in a pattern that she had seen in quantum chemistry. And she was floored. She was like, oh my God, I’ve got to investigate this thing. Maybe there’s a universal theory of interaction, whether it’s chemicals or people. So she did that. She went off to get her Master’s at Harvard, and she took classes in anthropology, ethnology, mathematical modeling, computer programming, and business consulting. So her advisor hauled her into the office one day and said, you know, Karen, you really need to focus. And she said, hey, I am focused like a laser beam on this one thing. I’m just drawing from different disciplines to address it. So what she did was come up with a new way to mathematically model human interaction. And she had to create some of the mathematics in order to do that. She was then brought in to ground zero when the AIDS crisis occurred. And she was told, look, we don’t know how this thing spreads. We don’t know where it started. We don’t know what’s going on. Can you model it and help? 
And she did. Many years later, like 35 years later, she’s still doing this kind of modeling of humans. And it found its way into her friend Malcolm Gladwell’s The Tipping Point, with mavens, gatekeepers, and all that kind of stuff. She does work with companies like Merrill Lynch that say, look, we can promote and keep people based on meeting KPIs. But if we let go some of these important people who contribute to others’ success, we’re screwed. We want to make sure we know who the important people are and make sure they’re taken care of and kept. So it’s relevant for HR now, it’s relevant for big data, cybersecurity, international security, and much more. So would an AI have been able to pattern match and get the artist and the year? We could probably train an AI to do that, yes. Would the AI have seen that pattern from a mezzanine and created a company that’s been called by CIO magazine one of the 100 most innovative firms in the world? Probably not. 

Suresh Shankar  44:34   

And it’s really interesting, such a fascinating story. And for me, what’s really interesting is that we are trying to take all of those things that these pattern-matching, pattern-spotting individuals do, and, CJ, people in my own company often say to me, you see things that we don’t see. And I’m like, maybe that’s all I do. But what I keep telling them is, it’s there in the data. It’s there if you really go and look at it, except that maybe I’m looking at it differently. But increasingly, there are now firms saying they can predict whether a startup will do well, or assess a piece of art. I just read this morning about a new robot that’s been built, it’s called Ai-Da, after Ada, and apparently it can paint brilliantly and ask some really great questions. My question, however, is that as we do this stuff, something different is also happening: our brains are getting rewired, and I don’t know whether for good or bad. For example, and I’m sure you know this, when we were young, remembering a fact was a big thing. How many capital cities do you know? Now, who cares? You just Google it, right? But I’m sure the amount of space being freed up in the brain is leading to other things the brain can be useful for. I don’t know whether this is good or bad. But I do believe there’s a whole amount of rewiring going on as we start to take some of these patterns that we see and put them into programs. I don’t know whether you’re seeing this in companies, but how do you recreate the mind of a Karen Stevenson in a computer program? That, I think, is the big issue we’re going to face. 

CJ Meadows  46:13   

You know, one of the things I heard from another tech founder is: I wish I had worked on my character and self-enlightenment before starting my company, because now I see all my character flaws embedded in my company. So I think what we’re going to find is that our own character flaws are going to be embedded in our companies, our data sets, our AI, our children, our cultures, everything. But again, I don’t think it’s realistic to try to eliminate them all. I do think it is important to try. And it is the journey of trying that makes all the difference, because then you create systems that can grow. 

Suresh Shankar  47:15   

Absolutely. And that’s such a wonderful thought. And CJ, we could keep talking; I had more questions for you, but I’m going to come back and do one more episode, perhaps on a couple of industries and things that you’re doing. But I do want to end with this one thought. You are a tech innovator, a business thinker, a design thinker, and a woman. In this year when the whole theme is Break the Bias, what is an example of a bias that you see? What is the bias that you’d like to break, and how are you breaking it? 

CJ Meadows  47:53   

That’s a good one that wasn’t on your list. 

Suresh Shankar  47:56   

We must always surprise people. And I think 

CJ Meadows  48:00   

Surprises are good. You know, I think one of the biases that we need to break is that personal and professional are different. That you have a home life and a work life, and they’re not both your life. We need to be re-integrating more of ourselves and bringing our entire selves to work. As Gary Hamel said, bring your whole heart to work, bring your whole self to work. And if we integrate better and diversify not only our workplaces and employee pools, but also our own minds, who knows what we can create? 

Suresh Shankar  48:56   

Absolutely, absolutely. And that’s a lovely one, this bias that we have that personal and professional, work and home, are separate. And we made great strides towards breaking that in the pandemic, haven’t we? It needed a completely bad event like that to make us realize, and now it’s normal. I mean, I’ve had podcasts where children appear, babies appear, and no one says it’s not professional anymore. 

CJ Meadows  49:23   

Exactly. You know, we went 

Suresh Shankar  49:25   

into your home. I mean, I shouldn’t be there; I don’t have a right to be in your home, really, if you look at it. But we’re all in each other’s homes. Some people talk about working from home, other people talk about living at work. 

 CJ Meadows  49:37   

Exactly, exactly. But don’t forget that we are interconnected and vulnerable now in a way that we never have been before in human history. And we are facing crises that are more frequent and more widespread. And this “it shouldn’t be in your home” thing started with television. When the Kennedy assassination and the Vietnam War were televised in people’s homes, it made faraway things personal and real. And when things become real and personal, then we get up off our butts and make the world better. 

 Suresh Shankar  50:19   

There is one thing that I personally faced, and I just want to share it; it’s really not about AI or data. I found that in this whole blurring that happened during the pandemic, and in a way I guess it’s an entrepreneurial thing too, your life is already integrated, but I found that I over-blurred everything, one thing into another. And so now I’m actually welcoming the transition periods. I’m realizing that at the end of this episode, I’m going to take a taxi and go to the office. And those transition moments are also important moments, because it’s not like a boundary, but, and I use the word transition for that reason, it helps you pass from one state of mind to another. And I think one of the things that happened in the pandemic is that everything blurred into one thing after the other. 

 CJ Meadows  51:10   

One of the things that people who have never worked from home have really struggled with is the whole separation thing, and the culture of a workplace, which they had to replicate themselves. In a workplace, it’s already been done for you: oh, sorry, I can’t talk, honey, I’m at work, or I’m in a meeting, or what have you. Try sitting in your living room and doing that. So all of us have needed the ability to set boundaries and create culture, even in our own homes. And when you gain those abilities, you have gained very big bricks in the wall of leadership. And we all need to become leaders, because in a network, different leaders will emerge as they’re needed. So be prepared. 

Suresh Shankar  52:14   

That is such a lovely thought to end this episode on, CJ. Thank you. I’ve taken so much from this episode, and I just love the piece on Fusioneering and the Karen Stevenson example. We talked about the biases towards grey-haired people, and about women who say things at conferences; there are so many little nuggets in what you just said that we could take out and put into a mini episode. We’ll be back with you for more. Thank you for being on the show. It has really been a pleasure to have you with us. 

CJ Meadows  52:51   

Thank you so much. It’s been a pleasure to be here and I look forward to speaking with you again and learning from you. 

Suresh Shankar  52:57   

To my viewers and listeners, thank you for listening to us today. Slaves to the Algo is available on YouTube, Spotify, Google, and Apple Podcasts. We release a new episode every week, sometimes even more frequently. If you liked this episode, don’t forget to like, share, and subscribe. Remember to stay relevant, because we are in the age of data and AI and we do not want to be slaves to the algo. See you all next week. Thank you. 

CJ Meadows  53:24   

Thank you 

Sruthi Ravishankar

Sruthi is a ‘Brand Mom.’ She believes that to see a brand do well in the market, is almost like proud parenting. Currently, Sruthi is a Brand Marketer and Storyteller at Crayon Data.