Slaves to the Algo | Innovation for Impact: Powering the Scottish Economy with Data and AI | Gillian Docherty OBE

Leadership   |   
Published March 29, 2021   |   

Computer scientist and business leader Gillian Docherty joined us in this week’s episode of Slaves to the Algo to talk about how data and AI can transform outcomes across multiple sectors for an entire country: Scotland. Appointed an Officer of the Order of the British Empire in 2019 in recognition of her contributions to technology, Gillian founded the Data Lab, Scotland’s Innovation Center for data and AI, in 2015. Gillian began by describing the incredible strides AI has made in healthcare and well-being, from detecting dehydration in homes for the elderly to improving remote communication between patients and doctors.

Gillian then highlighted how data helps plug gaps in societies and economies. The Data Lab is doing so through its own Master’s program and by funding students to pursue their Master’s degrees in specialized fields where businesses and industries have identified a significant gap in skills and knowledge. Gillian also touched on privacy and the importance of regulation, citing the Data Lab’s collaboration with the Scottish government and its network of companies and councils to create a framework focused on developing ethical AI. A member of the utopian camp, Gillian ended a great conversation on a hopeful note: that we are all building a future our children can look forward to enjoying. 

About Slaves to the Algo

Whether we know it or not, like it or not, our lives have been taken over by algorithms. Join two-time entrepreneur and AI evangelist Suresh Shankar, as he talks to leading experts in various fields to understand how they are using or being used by algorithms in their personal and professional lives. Each episode highlights how businesses can leverage the power of data in their strategy to stay relevant in this new age of AI. Slaves to the Algo is brought to you by Crayon Data, a Singapore-based AI and big-data startup.

Suresh Shankar is the founder and CEO of Crayon Data, a leading AI and big data start-up based in Singapore. Crayon Data’s flagship platform, maya.ai, is the AI platform powering the age of relevance.

How to listen to our podcast

Apple Podcasts
Spotify
Google Podcasts
YouTube

Full transcript of S2EP4 below:

Suresh:

Hello viewers and listeners, I’m Suresh Shankar, founder and CEO of Crayon Data, an AI and big data startup headquartered in Singapore. And I’m delighted to welcome you back to season two of my podcast: Slaves to the Algo. Slaves to the Algo is my attempt to demystify the age of the algorithm. I plan to share my own learnings and those of leading professionals in their fields, to understand how they are using, or being used by, algorithms in both their personal and professional lives.

And today, I’m delighted to have Gillian Docherty, a computer scientist, a business leader, and the co-founder and CEO of the Data Lab. Gillian worked for 22 years at IBM before founding the Data Lab. That’s a long time! I spent two years and I had to get out. But the interesting thing is that the Data Lab is Scotland’s Innovation Center for Data Science and Artificial Intelligence. So, it’s the first time we are having somebody on the show who actually represents a national body. Gillian is also on the board of the Tech Partnership and the Glasgow Chamber of Commerce. She was named one of the most influential women in technology in Scotland in 2019, and she was made an Officer of the Order of the British Empire in the 2019 Birthday Honours, another first on the show. Welcome to the show, Gillian!

Gillian:

It’s a pleasure to be here Suresh. Delighted to join you.

Suresh:

Gillian, you know, we first met in Singapore 18 months ago, on a day when we could still have a physical meeting, and we were talking about things face to face on a panel. And I think your journey has been fascinating. There are many facets to your career and your experiences… the turns you’ve taken. When I met you in Singapore, you were full of great stories. I also know that you have a great story about Charlie and Jarvis, and we’ll come to that right at the end.

But first up, let me start with a slightly more personal question. And I like to start with this because, you know, while we’re all professionals affected by technology, we’re also affected as individuals by the developments in AI and data. Can you share some examples of great algorithms that you’ve come across that have impacted your life, either positively or negatively? In some ways, you know, there are so many things that we don’t even know are run by algorithms. So, could you share some examples of how algorithms have made your own life better, or riskier?

Gillian:

Wow, that’s a great question to start with. You mentioned great stories, and I love stories; I think stories really bring things to life for everyone. And the ones that always get me, and the ones that always get the people I speak to, are the ones in healthcare: the use of algorithms to improve health and wellbeing. We’ve worked on lots of different projects, from using computer vision algorithms on complex neck cancers to assist the oncologists, to integrating patient experience measures with clinical treatment data to improve clinical treatment pathways. And those are the ones that always get me.

As you know, we all get older, we all have health complications, our families do. And for me, it’s the use of algorithms that really improve that for patients and for citizens. How do they improve treatment pathways? How do they improve how patients feel, and how they feel heard, which is another thing, so the treatment is not just done to them and they actually feel part of it? Those are the ones that really hit home for me. They’re also the ones where, when you’re chatting about what you do (not at any recent dinner parties, but certainly in the past) and you say you work in technology and artificial intelligence, the natural reaction of people is almost: the robots are coming, we’re all doomed. But when you really start to uncover it, well, actually it’s helping doctors, it’s helping oncologists, it’s helping radiologists treat you better, should you have any illness. I think that’s very, very powerful.

Suresh:

Could you share some examples? I mean, you know, I’m sure you have many, but something that so vividly brings home the good it can do.

Gillian:

Sure, so one of the ones that I really like was a project in the Highlands of Scotland, and it was with a housing association, who were building fit homes, and the homes were fitted with a number of sensors. Sensors gathering data and algorithms were built, analyzing the data from the sensors to help determine whether the individual in the home has a higher probability of falling. As we all know falling can be extremely dangerous for the elderly or infirm or people who are challenged maybe with mobility. And could we, using data from sensors, build algorithms that will predict when an individual has that higher probability and then intervene with care, social care, let their families know. And that was a really powerful project.

And we worked closely with NHS professionals, who helped us look at the characteristics, or should we say the challenges, that someone might have that would lead to those higher probabilities. So, dehydration is one. If an individual is dehydrated, they are more likely to have a fall. So, can those sensors help indicate whether the individual is drinking the same amount as they normally do, and whether they are moving around the house in the same way, and can we use that to hopefully improve their care and their ability to live independently?
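The baseline-versus-recent comparison Gillian describes can be pictured in a few lines. To be clear, this is only an illustrative toy, not the Data Lab’s actual model: the feature names (`drink_events`, `room_moves`) and all thresholds below are assumptions invented for the example.

```python
# Illustrative sketch: flag elevated fall risk by comparing recent
# in-home sensor behaviour against an individual's own baseline.

def fall_risk_score(baseline, recent):
    """Return a 0..1 risk score from simple behavioural ratios.

    baseline/recent are dicts with:
      'drink_events' - daily uses of kettle/tap sensors (hydration proxy)
      'room_moves'   - daily room-to-room transitions (mobility proxy)
    """
    score = 0.0
    # Drinking noticeably less than usual suggests dehydration risk.
    if recent["drink_events"] < 0.7 * baseline["drink_events"]:
        score += 0.5
    # Moving around the house much less than usual suggests reduced mobility.
    if recent["room_moves"] < 0.6 * baseline["room_moves"]:
        score += 0.5
    return score

def should_alert(baseline, recent, threshold=0.5):
    """Trigger a care/family alert when the risk score crosses a threshold."""
    return fall_risk_score(baseline, recent) >= threshold

baseline = {"drink_events": 10, "room_moves": 40}
quiet_day = {"drink_events": 5, "room_moves": 20}   # drinking and moving far less
normal_day = {"drink_events": 9, "room_moves": 38}

print(should_alert(baseline, quiet_day))   # True
print(should_alert(baseline, normal_day))  # False
```

In a real deployment the thresholds would be learned from data rather than hand-set, but the shape of the logic, comparing someone against their own baseline and alerting carers on a sustained drop, is the same.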

Suresh:

You’ve just given a new meaning to the eight glasses of water!

Gillian:

Absolutely.

Suresh:

I’m not in danger of following it, but I think I’m going to be a little bit more careful about that. But that’s very interesting, because that’s a small thing. You know, I would have expected you to say people fall because they’re going down steps or something, but no one thinks about dehydration. Are there any other such examples? Because these are the fascinating ways in which people don’t realize how data can actually help improve their lives.

Gillian:

Yeah, I think there’s a huge movement around health and wellbeing in integrated data: data from the devices you wear, smartwatches, other measures of your experience and how you’re feeling, and how that is integrated with what has traditionally been siloed healthcare data from your doctors and your clinicians. And, actually, a more holistic view of the individual is much more powerful. You also feel much more listened to. So, we did a project a few years ago as part of the Cancer Innovation Challenge.

And one of the projects was gathering data from the patient on how they were feeling, pre-treatment and post-treatment, and then integrating it with their clinical treatment data. Now, normally, you wouldn’t see your consultant from one appointment to the next. And by the time you go and see them, maybe patterns of how you’re feeling haven’t been spotted. But if, every day, you’re just filling in an app or a website, that gives you the chance to say, I’m not feeling great today, and give a bit more detail about how you’re feeling.

Actually, we can spot different patterns of how people are coping with treatment that will hopefully lead to improvement in the treatment pathways of the future.
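A minimal sketch of how daily self-reported scores could surface a worsening pattern between appointments. The 1–10 scale, window size, and drop threshold are assumptions made up for illustration, not details from the Cancer Innovation Challenge project.

```python
# Illustrative sketch: spot a worsening trend in daily self-reported
# wellbeing scores between consultant appointments.

def worsening_trend(scores, window=3, drop=2.0):
    """Return True if the recent average wellbeing score (1-10) has
    fallen by more than `drop` compared with the earlier average."""
    if len(scores) < 2 * window:
        return False  # not enough data yet
    earlier = sum(scores[:window]) / window
    recent = sum(scores[-window:]) / window
    return earlier - recent > drop

# Patient logging daily: starts around 8, slides towards 4.
declining = [8, 8, 7, 7, 6, 5, 4, 4]
stable = [8, 8, 7, 8, 7, 8, 8, 7]

print(worsening_trend(declining))  # True
print(worsening_trend(stable))     # False
```

A real system would use richer statistics and clinical input, but even this simple moving-average comparison shows how daily logging can reveal a pattern no single appointment would catch.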

Suresh:

And that’s fascinating. I am going to come back to you on that, because you also do a lot of work in data policy. This part of it is the good part, but obviously there are questions about the data being collected: how is it being put to use, privacy issues, etc. But we’ll come to that later. Moving on to what to me is very fascinating: the first time I heard about the Data Lab, I learned you’re headquartered in Scotland, and your founding mission, and I’m just going to read it out, is, quote, help Scotland maximize value from data, and lead the world to a data-powered future, unquote.

That was fascinating because a country is beginning to think about AI as a competitive edge. So, can you tell us a little bit, because you quit your commercial IBM job and you went to do this, how does this whole idea of a country, looking at AI and big data as a competitive edge come about? And what’s the, what’s the rationale there?

Gillian:

Absolutely. So, in fact, the Data Lab is part of a bigger Innovation Center program, and it really came from the understanding, at a national level, that we had some amazing industries, but those industries didn’t invest in R&D to the level that we had hoped. We also had some academics in the country, but they often worked with companies all around the world and maybe didn’t build the local relationships that would help some of those companies invest in R&D. So, from that understanding, the government and its agencies decided to create an intervention, and that intervention led to the creation of eight innovation centers, each focused on either an industry sector where Scotland has a great opportunity, or a technology area that would impact many sectors, where we felt we had academic capability and expertise that we could bring to bear to help the country and the country’s businesses. And from that program, the Data Lab was born.

So, we had some great assets in terms of data science and AI; Edinburgh University had one of the first AI departments in the world in the 1960s, though it was much more theoretical at that point. So we had a great set of foundations, but we had to intervene to make sure more companies became data active, that they knew how to get started and knew how to get help. We have quite a significant skills and talent program, because probably the biggest inhibitor to doing better with data and driving value from data is skills. And that’s at every level, from your board and your senior executives down to your data engineers and data scientists and their teams.

And so that’s where the idea flourished from. We’ve been going for just coming up to six years, and what has been really key, I think, is being given the freedom, the flexibility, and the empowerment to do what’s needed. So, we receive public funding to do what we do. We are given, you know, a broad mission. A big mission. But we are tasked with identifying where the gaps are, where the needs are for our economy and our businesses, and with building services and support to help plug those gaps.

Suresh:

I’m going to drill down a little, because the second thing that struck me about the Data Lab is that you are very precise and have a quantified objective that should appeal to data geeks, right? And, again, quote, “over the next five years to achieve the following economic and social impact in Scotland: 665 MSc students, 104 collaboration projects and 590 million pounds of economic impact.”

And I found that again very fascinating, because if you’re dealing with data, that’s data. How do those numbers come about? And what is the scale of this collaboration between government and, presumably, public sector and private sector? There’s teaching, in terms of the students, and there’s the economic impact. So, a broad variety of things out there. How did you come up with those numbers?

Gillian:

Yeah, so these numbers are for the five years from 2019 and they were based on our experience from the first few years of running the Data Lab.

Suresh:

Did an AI come up with those numbers or did a human being come up with those numbers?

Gillian:

Human being, human being. But it was based on analysis of the data from the first few years of work. So, we know the projects that we were working on and what their impact was. And for every project that we do, we ask the company a few questions; we’ve got quite an extensive engagement process where we work with a company to understand the challenge, to understand what kind of new products or services they would like to build. If they were successful, how many more jobs do they think they could create in Scotland? How much additional revenue and profit for the business? And then we can invest in the project, and bring in an academic team or other support mechanisms. I have an in-house team of data scientists we can deploy to projects as well, where we help the company on that journey. So, we can see a direct line of sight from our investment and our support to new products or services or business models in the market.

To new jobs being created, and additional revenue and profit for the company. It is very, very tightly managed. And that is where those types of numbers come from. In terms of the master’s students, with the studentships we offer we pay the fees for master’s students, we bring the cohort together for meta-skill and employability training, we challenge them during an innovation challenge week for businesses, and then probably one of the most important things we do is find placements for those students in industry for their master’s project. That starts to build relationships with companies; many of our students move straight into full-time employment with those companies. It lets them try out, you know, whether it’s the right fit: culturally, in values, in the type of work you do. And this year, for example, we have 160 studentships we’re supporting across 13 universities and 26 different master’s courses.

So, we have some that are very broad, you know, data engineering or data science or AI, and some that are quite applied, like the use of AI in health informatics, or the use of AI in financial management. That gives us the balance to generate the kind of talent we are hearing from industry is required. And that’s a great thing to be able to do.

And what it has also led to is the creation of brand-new courses that a university might previously have taken longer to launch; because they have the certainty of a number of funded places, they know they can make it viable. And, therefore, we see a proliferation of new talent that has been really beneficial, so that we have that breadth of talent with the different skill sets needed, and it has allowed university partners to scale those courses. In some courses we only fund maybe six to ten places, but those courses now have 100, 150 participants. So, the other students are paying their own fees, but it’s creating that wealth of talent. And that has seen great investment, with companies coming and building their data teams here, and that’s in the context of a broader “come to Scotland” message, and it’s a fabulous message.

Suresh:

No, I think people are coming to Scotland anyway, without the AI and the Data Lab. But you seem to have made, if I may call it this, a clear template of the pathway: from teaching people, to working with the government, to quantifying the impact, to actually measuring the impact. Is that something that’s exportable?

Gillian:

Absolutely. I think in my travels, when I came to visit you in Singapore, and similarly in parts of the US, East Coast, West Coast, you know, people said, are you coming to set up the Data Lab here? And I said, well, anyone can set up a Data Lab. But there are some key ingredients that are really important. It’s a government that understands and really believes in the impact of data and AI on the economy and society. It’s the freedom for an organization like the Data Lab to exist and to do the right things. I don’t think it’s as straightforward as a cookie-cutter approach where you do the exact same things in every country.

If you set up a Data Lab in Singapore or India or Indonesia, it needs to be relevant for that market. You know, so we might work with fish farms; aquaculture is a big industry for Scotland. Food and drink, or the whisky industry; you know, we do a lot of work in that industry.

Suresh:

I think that’s something that our viewers and listeners would really be interested in. How is your data and AI improving the quality of the Scotch that we are consuming?

Gillian:

Oh, my goodness. I think I might be giving away some trade secrets here, Suresh, so I’ll need to be really careful. But actually, there was a project about whisky tint. Whisky tint is a really important aspect of selling whiskies: the tint and the color of it. And one of the projects was building an algorithm that analyzes the bottle type and its impact on tint, and the environmental impact of where the bottle was kept: in what kind of facility, the temperature, the humidity, all of those things. Which of the ingredients, the environmental ingredients and the physical ingredients like the bottle, have the biggest impact on whisky tint?

But I can’t say any more. Otherwise, I might have the whisky industry chasing me for giving away a trade secret.

Suresh:

No, that’s fascinating, and I literally meant that as a joke, but I had no idea that so much work has been done on the tint. And, to me, it points out this whole thing, right? You talked about people falling and dehydration levels, and now you’re talking about whisky; there are so many different ways in which data is taking over this world. Is there any other example that you can share from a governmental or public sector perspective that is as fascinating as what the Data Lab has enabled, you know, in terms of making AI more usable?

Gillian:

Yeah, I think it’s really interesting, and you caught me at a great time, because just yesterday Scotland launched its AI strategy. And so, I would encourage all listeners and viewers to go and have a look. Scotland’s AI strategy is built around our people; the tagline is trustworthy, ethical, and inclusive. It has been a process we’ve supported the government on for the last 18 months, and it recognizes at the outset that we will never be able to invest in AI in the same way as some of the world’s largest economies. But what we can do is build the AI that works for our people: AI that is trustworthy, that is ethical, you know, along with the creation of an AI playbook that will help our businesses and our organizations.

So the launch of that was really crucial, and it was built over 18 months with significant consultation, both with the public and with people working in this area, and with international leaders. We created a number of working groups, from data infrastructure to ethics and regulation, to joining the dots with what else is happening, either in the UK or globally. We held public consultations and we ran public engagement workshops. There was a great one, Suresh, that I’m sure you’d love: a multi-generational workshop where we had, you know, the teenage child, the mom or dad, and the grandparent, and we explored what AI means to you and what your hopes or fears are. A lot of the time we had to start with: what even is AI? I don’t know what it is. But the resounding feedback from the public was that, actually, they were quite excited by the end of those workshops.

They were excited about the opportunity that it created. And just to give a few specific examples of what’s happening in the public sector in Scotland: we jointly support the Scottish Government’s data accelerator. So, each year, a number of projects come through where a group working day to day, maybe data analysts, not quite data scientists or AI engineers yet, believe something they are doing could be significantly improved by using new techniques. To pick a very simple example: the census, which the UK is due to run again in 2021.

Traditionally, selecting your employment type was a real challenge, and massive data-wrangling had to be done after the fact to make sure the categories of employment gathered during the census were accurate and meaningful. And actually, we built algorithms that would significantly help that analysis, which could save a significant amount of time and make it much more accurate and usable in the future. And these are just little simple things that can have a big impact on a much bigger project or program. Another one was…
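The auto-coding step Gillian alludes to can be pictured with a toy example. This is an illustrative sketch only: the category names and keyword lists below are invented assumptions, and real census coding uses far richer occupation classifications and statistical matching.

```python
# Illustrative sketch: auto-coding free-text occupation answers into
# standard categories, the kind of data-wrangling step described above.

OCCUPATION_KEYWORDS = {
    "healthcare": ["nurse", "doctor", "paramedic", "midwife"],
    "education": ["teacher", "lecturer", "tutor", "professor"],
    "software": ["developer", "programmer", "software", "engineer"],
}

def code_occupation(free_text):
    """Map a free-text answer to a category, or 'unclassified' for manual review."""
    text = free_text.lower()
    for category, keywords in OCCUPATION_KEYWORDS.items():
        if any(k in text for k in keywords):
            return category
    return "unclassified"

answers = ["Staff Nurse (NHS)", "primary school teacher", "sofware developr"]
print([code_occupation(a) for a in answers])
# The misspelled answer lands in 'unclassified' - exactly the kind of
# record that used to require manual wrangling after the fact.
```

Even this crude matcher shows why automating the step saves time: the bulk of clean answers are coded instantly, and human effort is concentrated on the ambiguous remainder.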

Gillian:

Sorry.

Suresh:

Go ahead, no, please.

Gillian:

Just one more that I really loved was the delayed-discharge project with the NHS. I don’t think this is unique to Scotland, but when a patient gets admitted into hospital for treatment, often that treatment and illness mean that, when they come to be discharged, they need additional care packages or maybe alterations in their home, whether it’s a walk-in shower or a stairlift or handrails. And often a patient gets stuck in hospital while those things happen, because it’s not processed until near the time when they’re allowed to be clinically discharged. So we worked on a project: could we predict, at the point of entry to the hospital rather than leaving it to a clinical discharge team, which patients that was likely to happen to? The algorithms, joining different data sets together, were 97% accurate in predicting which patients were likely to become blocked in hospital at their discharge time, and therefore…

Suresh:

That is a mind-boggling number, if you’re telling me that when a patient walks into a hospital in Scotland, you have an algorithm that can predict with 97% certainty when they will be discharged.

Gillian:

No, not when they’ll be discharged, but the likelihood of them becoming blocked in hospital, because they will require further assistance on discharge, and that allows you to start to look at intervening to build those support mechanisms.
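To make the idea concrete, here is a deliberately simplified sketch of admission-time risk scoring. The features, weights, and threshold are assumptions made up for illustration; the actual NHS project joined multiple real data sets and would have used a trained statistical model rather than hand-set weights.

```python
# Illustrative sketch: score, at admission, the likelihood that a
# patient's discharge will be delayed because extra care packages or
# home adaptations will be needed.

def delay_risk(patient):
    """Crude additive risk score (0..1) from admission-time features."""
    score = 0.0
    if patient["age"] >= 75:
        score += 0.4
    if patient["lives_alone"]:
        score += 0.3
    if patient["mobility_aid"]:
        score += 0.3
    return score

def flag_for_early_planning(patient, threshold=0.6):
    """Flag patients whose care packages should be arranged from day one."""
    return delay_risk(patient) >= threshold

p1 = {"age": 82, "lives_alone": True, "mobility_aid": False}
p2 = {"age": 45, "lives_alone": False, "mobility_aid": True}

print(flag_for_early_planning(p1))  # True
print(flag_for_early_planning(p2))  # False
```

The value is in the timing: by flagging the patient at admission rather than at discharge, the care team can start arranging support packages weeks earlier, which is exactly the intervention Gillian describes.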

Suresh:

Okay, but still, 97% on any algorithm is a phenomenal number.

You know, it’s very interesting. We talked about trustworthiness, and you’ve talked about data wrangling and things like employment type. Most people think about data and AI and they think about cool social media apps and a scroll. But the kind of work that you’re talking about is deep, it’s detailed. A lot of it is about just making sure the source data is actually, as you said, trustworthy, to be able to draw the conclusions that AI can draw. And it’s something that gets ignored a lot, so what’s your take on that?

I mean this whole idea of data wrangling and the data janitor, the person who gets it all together: unsexy, but really the heart of the algorithm is in that quality.

Gillian:

Oh, absolutely. Fundamentally, data engineering skills are highly, highly sought after, and a lot of the big companies, you know, the Spotifys, are building huge pipelines of data science algorithms and machine learning and deep learning. What is really challenging is this: it’s great to run a little experiment, it’s great to build something in a pilot phase, and you can check that it works, you can check its efficacy. One of the biggest challenges is scaling that, and then making sure the data pipeline that feeds into it stays the same. Is it the same? Is it changing?

And that’s a real challenge when you’re looking at scale. We’ve certainly seen a huge increase in the challenges in that space. Many industries are looking at scaling algorithms, and the data pipeline required, and the resilience of that data pipeline, is really key.

Suresh:

Yeah, and you mentioned the strategy, trustworthy, inclusive. I want to go to the inclusive. Your group in partnership with some others recently released a fascinating report called Mind the Gap, in which you talked about the Scottish data gap. And what is the idea of a data gap? And how does it affect societies? Because I just found that one of the most fascinating parts that I’ve come across recently.

Gillian:

Yeah, so, it was a project that we worked on, as you said, with the Scottish Council for Development and Industry and other partners. The Scottish data gap really is the gap between the health and social care data that we currently collect, utilize, and share, versus the health and social care data that we think we need to collect, utilize, and share in the future. And it can be generated in many different contexts: from clinical care settings like hospitals, to pharmacies, to care homes, to community and commercial settings like online retail, wearables, or the Internet of Things. And across that breadth of contexts: how do we drive new insights and accelerate new innovations to drive our wellbeing economy? Scotland is one of the few countries in the world that has really committed to becoming a wellbeing economy. If we have a healthy nation, we’ll have a healthy economy, or a healthier economy. And so the report and the project really looked into that, identified a few areas, about why we need to do it and how we possibly could do it.

Suresh:

And you’re saying that, I mean, I get the idea of the wellbeing economy, though I think it’s down to the Scotch that you’re ordering. Leaving that aside for the moment: are you saying that you’re collecting a lot of healthcare data from various sources and you want to collect more, or that you’re not able to put together all the data that’s already been collected? Which one is more of the gap, really?

Gillian:

Oh, excuse me, um, there’s a gap in both areas. I think the gap of bringing it all together is probably bigger and more challenging. We collect a huge amount of health data in Scotland; we all have a community health index number, called a “CHI number,” and that is a unique identifier that is used by all healthcare providers.

And our data is brought together in something called a safe haven that is available for research purposes, where you can bring datasets together. But I think there’s more work to be done there. And again, ethics and trustworthiness need to be at the heart of it.

Suresh:

And we’ll come to ethics in a moment, Gillian, but it’s fascinating, because healthcare costs every economy anywhere between 10% and 20% of the economy in terms of the amount of money that’s spent on health, apart from, of course, the emotional impact it has on us as human beings. It seems revolutionary that you can actually quantify the data gap that exists on the way to becoming a more wellbeing-oriented society. This seems fundamentally revolutionary to me.

Gillian:

Great. Great, you find it revolutionary. But I think, if you get under the covers of it, it actually probably is common sense. It kind of makes sense to people who work with data and understand the potential that it has. Now, certainly in healthcare and clinical settings, you know, do not take humans out of the loop. Support your oncologists, and your radiologists, and your doctors with better and more informed data, more connected data, with other data sources that give them, again, that full picture. But I think it’s important, for us as a country and for many countries around the world, to get that more holistic view and to support their citizens, the people, and those in health and care settings.

Suresh:

I have three more questions. Gillian, I think we could carry on this conversation for hours, being two people who both love data and how we can use it. But we talked about the third pillar: ethics, or the ethical part of this AI strategy. One of the big concerns that people tend to have, especially in areas like healthcare, is about bias and ethics. If I give my data up, who is going to be able to get hold of it? Is the data secure? Will it be used to make my insurance policy more costly? So there’s a whole lot of questions around bias and ethics in AI, and that’s probably a podcast in itself, but I just want to ask you a simple question.

It seems that in the last 10 years the growth of data and AI has largely been left to the private sector to do whatever they want with. But now, as we go forward, what role is the Data Lab playing? How are bodies like yours getting involved with regulators and governments, to make sure that we actually have fair and ethical use of data and AI?

Gillian:

I think you’re absolutely right, it has predominantly been the private sector, and it’s also been behind closed doors. So, more often than not, we’ve not known that algorithms and data were being used in certain ways; I think we’re much more aware now than we were. The Cambridge Analytica-Facebook scandal and other things that have come into the public domain over the last couple of years have made us much more aware, as citizens, about what’s going on in the private sector. But you’re absolutely right: as the public sector engages with the opportunity, there is a raft of questions, not just around privacy and bias, but around unintended consequences, which is a huge challenge. What do you do when you uncover unintended consequences? There’s a great saying that if you don’t have a diverse team working on this, by default your algorithm will be biased. So, how do we bring the diversity, the diversity of thought, the diversity of challenge, to really try and understand, before we proceed, what potential unintended consequences there could be?

So, Scotland, as part of the COVID response and recovery, has built a data and intelligence network between many parts of the public sector, whether local government or central government, or health and care settings, or universities. And as part of that data and intelligence network, we’ve helped support the government in the creation of an ethics framework, which really is going to be used as a lens for each project that the network looks to work on or solve.

There’s a serious suite of questions that are really critical and that need to be asked. That will hopefully help identify potential privacy challenges, bias challenges, and ethical issues, which will allow the team to evaluate how to minimize or eradicate them, and even whether, fundamentally, the project should go ahead. I think these questions are really, really important, and it’s great that we’re now asking them; there’s much more conversation around ethics and bias than there was three or four years ago. It was certainly needed, and I think it will never go away. If anything, we need to become more robust at asking those questions, at understanding how we progress in areas where those questions give us answers that don’t align with our values and principles, where a project is not trustworthy, not ethical, and maybe not inclusive, and at having the confidence and ability to say, actually, we’re not going to proceed, because we can’t see a way of doing this that meets the values we have as a nation.

Suresh:

So if I’m understanding this right, the ethical framework is essentially like a checklist about what was done, how it was done, and so on, that people have to put out there before they actually go ahead and do the research. Is that right?

Gillian:

Yes.

Suresh:

I mean, it’s a bit like what Dr. Atul Gawande talks about in The Checklist Manifesto.

Gillian:

It’s more detailed than that. It’s open questions, not just a check: have you checked this? Yes. It’s not a tick-box exercise. It’s much deeper in terms of understanding what the project is trying to achieve and how it’s going to do that. It’s trying to uncover potential areas that would have been missed, or maybe an unintended consequence of the particular area or project being worked on.

Suresh:

That is, sorry go ahead.

Gillian:

No, no, after you.

Suresh:

It’s fascinating, because I was talking to Ian Myles about Explainable AI, and he gave a very nice example: in the 70s, food labels didn’t carry any nutrition information. Now, whether or not we look at it, we expect the nutrition information to be there. He said your AI is going to come with a nutrition label, and Apple, in fact, has started putting out a nutrition label on how data is collected, or making strides towards this. And to me, this seems like the right direction for AI to go, as opposed to the very dark “everything is invisible, no one knows what’s going on” approach, which seemed to characterize the last 10 years. So, do you see more and more companies and governments encouraging this kind of open AI, rather than the “I did what I want, I don’t know how I got the data” attitude? What do you think the future is going to be: utopian or dystopian?

Gillian:

Oh, I’m absolutely in the utopian camp, without a doubt. But in order to make sure that we stay there, I think it’s about making sure people are educated. We’ve done some work looking at understanding: what does a data citizen look like? What does a data worker look like? What does a data professional look like? We’ve got to inform people, because companies will not be allowed to take the black-box dystopian route if we don’t buy their products or engage with their services, and therefore we need to educate people to, as you said, read those labels. You could go for the red thing, but no, you shouldn’t have a lot of that, because it’s not very good for you.

But we’ve been educated to read those labels, understand them, and understand what’s good for us, what’s moderately good for us, and what’s really not good for us. The more we educate ourselves in that way, the better. The consequence of people not wanting to engage with companies that aren’t explaining how they do things is more powerful than anything else. Yes, we could regulate it, but can we really keep regulation ahead of potentially bad actors?

Or actually, if people don’t engage, then there’s no reason to be a bad actor, because you don’t have a company or a product or a service that people will engage with. But absolutely: utopian, engaged, informed, explainable. Definitely, that’s the camp I’m in.

Suresh:

And that’s wonderful to hear. Most of the time, I’m in the utopian camp, and then some days I’m like, oh my god, look at this, it’s so dark, I don’t think this can be controlled, human nature is too dystopian. But that’s probably a conversation in itself, like I said. So, I’m going to point this out to all our viewers and listeners and ask Gillian to take a quick peek at the future. About four years ago, I think, you delivered a TEDx talk on 2037.

You painted an almost Black Mirror-esque view of your daughter Charlie’s life in 2037, and you concluded by saying that we need to lead today to make sure that our children are not disrupted but aided by such technologies. There’s a whole TEDx talk everyone can go and see: Gillian Docherty, 2037: Who’s leading? Who’s following? But I just wanted to ask you, looking back now, three or four years from that moment, are we leading? Do you believe that we will be masters of the algorithms, or are we becoming more like slaves? From where we stand in 2021, what do you think your daughter’s life in 2037 will look like?

Gillian:

Yeah, I look back and I’m optimistic. I’m hopeful. Back then, the talk was meant to provoke. It probably scared and excited people in equal measure. I think people came out of hearing that talk thinking, I don’t like that bit, but I quite like that bit, and that’s the whole point. It was to provoke thought and say: actually, we are in charge of our destiny. It’s up to us how we build the algorithms, what boundaries we put around them, where we want to use them and where we don’t. I think it leads to the question: just because we could, doesn’t mean we should. And therefore, I am hopeful that we are leading and that we are building a future that I will look forward to my daughter enjoying.

Suresh:

And is there any one particular thing that you predicted would happen in 20 years that is already happening?

Gillian:

Oh, my goodness, I am trying to remember back over all the elements. There was a little bit about how, if you use wearables and you share the data with your fitness instructor, they kind of know what you’ve been doing. There are driverless cars happening in some parts of the world, certainly more in trial format than for real. The one that I am really hopeful for, because it was one that was even beyond 2037, was the 3D printing of a fully functioning organ for a heart transplant. Now, we are seeing huge amounts of 3D printing being used for prosthetics and to help surgeons practice and train. But we’re starting to see the glimmer of functioning cells and skin in 3D printing, which is really quite exciting.

Suresh:

In fact, and this is one of the things that always fascinates me, we predict things looking 20 years ahead, and sometimes when we look back after just 5 years, a lot of it has already happened while other things seem further away. I’m sure if you watch the talk again, you’ll find 10 more new things.

But, like I said, thank you very much for being on the show, Gillian Docherty: Officer of the Order of the British Empire, CEO of the Data Lab, computer scientist, business leader, and a person who is helping transform a country with the use of data and AI. It’s been a pleasure having you on the show; thank you very much for joining us.

To our viewers and listeners, Slaves to the Algo is available on YouTube, Spotify, Apple Podcasts, and Google Podcasts. We release a new episode every Tuesday. If you enjoyed this week’s episode, and I’m sure you did, and if you want to find out the mystery of Scotch, you’ve got to look up Gillian and find out about all her TED talks. Don’t forget to rate, share, and subscribe. Stay safe in the age of COVID, stay relevant in the age of AI, and see you all next week. Once again, thank you, Gillian.

Gillian:

Thank you, Suresh.