House of Lords Member talks AI Ethics, Social Impact, and Governance

CXO Talk Jan 2021

What are the social, political, and government policy aspects of artificial intelligence? To learn more, we speak with Lord Tim Clement-Jones, Chairman of the House of Lords Select Committee on AI and advisor to the Council of Europe AI Committee.

What are the unique characteristics of artificial intelligence?

Michael Krigsman: Today, we're speaking about AI, public policy, and social impact with Lord Tim Clement-Jones, CBE. What are the attributes or characteristics of artificial intelligence that make it so important from a policy-making perspective?

Lord Tim Clement-Jones: I think the really key thing is (and I always say) AI has to be our servant, not our master. I think the reason that that is such an important concept is because AI potentially has an autonomy about it.

Brad Smith calls AI "software that learns from experience." Well, of course, if software learns from experience, it's effectively making things up as it goes along. It depends, obviously, on the original training data and so on, but it does mean that it can do things of its own (not quite volition, but certainly of its own motion), which therefore have implications for us all.

Where you place those AI applications, algorithms (call them what you like) is absolutely crucial because, if they're black boxes and humans don't know what is happening, and they're placed in financial services, government decisions over sentencing, or a variety of really sensitive areas, then, of course, we're all going to be poorer for it. Society will not benefit if we just have this range of autonomous black box solutions. In a sense, that's a rather dystopian way of describing it, but it's certainly what we're trying to avoid.

Michael Krigsman: How is this different from existing technologies, data, and analytics that companies use every day to make decisions and consumers don't have access to the logic and the data (in many cases) as well?

Lord Tim Clement-Jones: Well, of course, it may not be if those data analytics are carried out by artificial intelligence applications. There are algorithms that, in a sense, operate on data and come up with their own conclusions without human intervention. They have exactly the same characteristic.

The issue for me is this autonomy aspect of data analytics. If you've got actual humans in the loop, so to speak, then that's fine. We, as you know, have slightly tighter (well, considerably tighter) data protection in Europe as a framework for decision-making when you're using data. The aspects of consent or using sensitive data are largely covered. One has a kind of reassurance that there is, if you like, a regulatory framework.

But when it comes to automaticity, it is much more difficult because, at the moment, you don't necessarily have duties relating to the explainability of algorithms or the freedom from bias of algorithms, for instance, in terms of the data that's input or the decisions that are made. You don't necessarily have an overarching rule that says AI must be developed for human benefit and not, if you like, for human detriment.

There are a number of areas which are not covered by regulation. Yet there are high-risk areas that we really need to think about.

Algorithmic decision-making and risks

Michael Krigsman: You focus very heavily on this notion of algorithmic decision-making. Please elaborate on that, what you mean by that, and also the concerns that you have.

Lord Tim Clement-Jones: Well, it's really interesting because, actually, quite a lot of the examples that one is trying to avoid come from the States. For instance, parole decisions made using artificial intelligence, or live facial recognition technology using artificial intelligence.

Sometimes, you get biased decision-making of a discriminatory nature in racial terms. That was certainly true in Florida with the COMPAS parole system. It's one of the reasons why places like Oakland, Portland, and San Francisco have banned live facial recognition technology in their cities.

Those are the kinds of areas where you really do need to have a very clear idea of how you design these AI applications, what data you're putting in, how that data trains the algorithm, and then what the output is at the end of the day. It's about trying to get some really clear framework for this.

You can call it an ethical framework. Many people do. I call it just, in a sense, a set of principles that you should basically put into place for, if you like, the overall governance or the design and for the use cases that you're going to use for the AI application.

Michael Krigsman: What is the nature of the framework that you use, and what are the challenges associated with developing that kind of framework?

Lord Tim Clement-Jones: I think one of the most important aspects is that this needs to be cross-country. This needs to be international. My desire, at the end of the day, is to have a framework which, in a sense, assesses the risk.

I am not a great fan of regulation. I don't really believe that you've got to regulate the hell out of AI. You've got to basically be quite forensic about this.

You've got to say to yourself, "What are the high-risk areas that are in operation?" It could be things like live facial recognition. It could be financial services. It could be certain quite specific areas where there are high risks of infringement of privacy or decisions being made in a biased way, which have a huge impact on you as an individual or, indeed, on society because social media algorithms are certainly not free of issues to do with disinformation and misinformation.

Basically, it starts with an assessment of what the overall risk is, and then, depending on that level of risk, you say to yourself, "Okay, a voluntary code. Fine for certain things in terms of ethical principles applied."

But if the risk is a bit high, you say to yourself, "Well, actually, we need to be a bit more prescriptive." We need to say to companies and corporations, "Look, guys. You need to be much clearer about the standards you use." There are some very good international standard bodies, so you prescribe the kinds of standards, the design, an assessment of use case, audit, impact assessments, and so on.

There are certain other things where you say, "I'm sorry, but the risk of detriment, if you like, or damage to civil liberties," or whatever it may be, "is so high that, actually, what we have to have is regulation."

Then you have a framework. You say to yourself: you can only use, for instance, live facial recognition in this context, and you must design your application in this particular way.

I'm a great believer in a graduation, if you like, of regulation depending on the risk. To me, it seems that we're moving towards that internationally. I actually believe that the new administration in the States will move forward in that kind of way as well. It's the way of the world. Otherwise, we don't gain public trust.
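
To make this graduated, risk-based approach concrete, here is a minimal sketch in Python of the tiering logic described above: voluntary codes for low-risk uses, prescribed standards and audits in the middle, and binding regulation at the top. The application areas, risk scores, and thresholds are illustrative assumptions for the sketch only, not anything proposed in the conversation.

```python
from enum import Enum

class Oversight(Enum):
    VOLUNTARY_CODE = "voluntary ethical code"
    PRESCRIBED_STANDARDS = "prescribed standards, audits, impact assessments"
    REGULATION = "binding regulation with restricted use cases"

# Illustrative risk scores (0 to 1) for hypothetical application areas;
# both the areas and the numbers are assumptions for this sketch only.
RISK_SCORES = {
    "retail chatbot": 0.2,
    "price-comparison ranking": 0.5,
    "credit scoring": 0.7,
    "live facial recognition": 0.9,
}

def oversight_for(risk: float) -> Oversight:
    """Map an assessed risk level to a governance tier, mirroring the
    graduated, risk-based approach described in the interview."""
    if risk < 0.4:
        return Oversight.VOLUNTARY_CODE
    if risk < 0.8:
        return Oversight.PRESCRIBED_STANDARDS
    return Oversight.REGULATION

for area, risk in RISK_SCORES.items():
    print(f"{area}: {oversight_for(risk).value}")
```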

Trust and confidence in AI policy

Michael Krigsman: The issue of trust is very important here. Would you elaborate on that for us?

Lord Tim Clement-Jones: There are cultural issues here. One of the examples that we used in our original House of Lords report was GM foods. There's a big gulf, as you know, between the approach to GM foods in the States and in Europe.

In Europe, we sort of overreacted and said, "Oh, no, no, no, no, no. We don't like this new technology. We're not going to have it," and so on and so forth. Well, it was handled extremely badly because it looked as though it was just a major U.S. corporation that wanted to have its monopoly over seed production, and it wasn't even possible for farmers to grow new crops from saved seed, and so on.

In a sense, all the messaging went wrong. There was no overarching ethical approach to the use of GM foods, and so on. We're determined not to get that wrong this time.

The reason why GM foods didn't take off in Europe was because, basically, the public didn't have any trust. They believed, if you like, an awful lot of (frankly) the myths that were surrounding GM foods.

It wasn't all myth. They weren't convinced of the benefit. Nobody really explained the societal benefits of GM foods.

Whether it would have been different, I don't know. Whether those benefits would have been seen to outweigh some of the dangers that people foresaw, I don't know. Certainly, we did not want this kind of approach to take place with artificial intelligence.

Of course, artificial intelligence is a much broader technology. A lot of people say, "Oh, you shouldn't talk about artificial intelligence. Talk about machine learning or probabilistic learning," or whatever it may be. But AI is a very useful, overall description in my view.

Michael Krigsman: How do you balance the competing interests, for example, the genetically modified food example you were just speaking about, the interest of consumers, the interest of seed producers, and so forth?

Lord Tim Clement-Jones: I think it's really interesting because I think you have to start with the data. You could have a set of principles. You could say that app developers need to look at the public benefit and so on and so forth. But the real acid test is the data that you're going to use to train the AI, the algorithm, whatever you may describe it as.

That's the point where there is this really difficult issue about what data is legitimate to extract from individuals. What data should be publicly valued and not sold by individual companies or the state (or whatever)? It is a really difficult issue.

In the States, you've had that brilliantly written book Surveillance Capitalism by Shoshana Zuboff. That raises some really important issues. Should an individual's behavioral data—not just ordinary personal data, but their behavioral data—be extractable and usable and treated as part of a data set?

That's why there is so much more discussion now about, well, what value do we attribute to personal data? How do we curate personal data sets? Can we find a way of not exactly owning but, certainly, controlling (to a greater extent) the data that we impart, and is there some way that we can extract more value from that in societal terms?

I do think we have to look a bit more at this. Certainly, in the UK, we've been very keen on what we call data trusts or social data foundations: institutions that hold data, public data; for instance, data from our national health service. Obviously, you have a different health service in the States, but data held by a national health service could be held in a data trust and, therefore, people would see what the framework for governance was. This would actually be very reassuring in many ways: people would see that their data was simply going to be used back in the health service or, if it was exploited by third parties, that that was again for the benefit of the national health service: vaccinations, diagnosis of rare diseases, or whatever it may be.

It's really seeing the value of that data and not just seeing it as a commercial commodity that is taken away by a social media platform, for instance, and exploited without any real accountability. Arguing that terms and conditions do the job doesn't ever work. I'm a lawyer, but I still don't believe that terms and conditions are adequate in those circumstances.
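
A data trust of the kind described here is essentially a transparent governance gate in front of a data set. The sketch below (Python; the purposes, names, and rules are entirely hypothetical, not an actual NHS or data-trust mechanism) shows the shape of such a gate: access is granted only for purposes the trust's published charter recognises as benefiting the health service.

```python
from dataclasses import dataclass

# Purposes this hypothetical trust's published charter treats as benefiting
# the health service; the categories are illustrative assumptions only.
APPROVED_PURPOSES = {
    "vaccination planning",
    "rare disease diagnosis",
    "health service research",
}

@dataclass
class AccessRequest:
    requester: str
    purpose: str
    benefits_health_service: bool  # asserted by the requester, audited by the trust

def review(request: AccessRequest) -> bool:
    """Apply the trust's published rules to a third-party access request.
    Because the rules are explicit and inspectable, data subjects can see
    exactly when, and for what, their data may be used."""
    return request.benefits_health_service and request.purpose in APPROVED_PURPOSES

# A diagnostics project aligned with the charter is approved;
# a purely commercial request is refused.
print(review(AccessRequest("research_partner", "rare disease diagnosis", True)))  # True
print(review(AccessRequest("ad_broker", "targeted advertising", False)))          # False
```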

Decision-making about AI policy and governance

Michael Krigsman: We have a very interesting question from Arsalan Khan, who is a regular listener and contributor to CXOTalk. Thank you, Arsalan, always, for all of your great questions. His question is very insightful, and I think also relates to the business people who watch this show. He says, "How do you bring together the expertise (both in policymaking as well as in technology) so that you can make the right decisions as you're evaluating this set of options, choices, and so forth that you've been talking about?"

Lord Tim Clement-Jones: Well, there's no substitute for government coordination, it seems to me. The White House under President Obama had somebody who really coordinated quite a lot of this aspect.

There has been an AI specialist in the Trump White House as well. I don't think they were quite given the license to get out there and coordinate the effort that was taking place but, I'm sure, under the new administration, there will be somebody specifically, in a sense, charged with creating policy on AI in all its forms.

The States belongs to the Global Partnership on AI with Canada, France, UK, and so on. And so, I think there is a general recognition that governments have a duty to pull all this together.

Of course, it's a big web. You've got all those academic institutions, powerful academic institutions, who are not only researching into AI but also delivering solutions in terms of ethics, risk assessments, and so on. Then you've got all the international institutions: OECD, Council of Europe, G20.

Then at the national level, in the UK for instance, we've got regulators of data. We have an advisory body that advises on AI, data, and innovation. We have an office for AI in government.

We have The Alan Turing Institute, which pulls together a lot of the research that is being done in our universities. Now, unless somebody is sitting there at the center and saying, "How do we pull all this together?" it becomes extremely incoherent.

We've just had a paper from our competition authority on algorithms and the way that they may create consumer detriment in certain circumstances where they're misleading, for instance on price comparison or whatever it may be.

Now, that is very welcome. But unless we actually bolt all of that into what we're trying to do across government and internationally, we're going to find ourselves with one set of rules here and another set of rules there. Actually, trading across borders is difficult enough as it is, and we've got all the Privacy Shield and data adequacy issues at this very moment. Well, if we start having issues about inspection of the guts of an algorithm before an export can take place—because we're not sure that it's conforming to our particular set of rules in our country—then I think that's going to be quite tricky.

I'm a big fan of elevating this and making sure that, right across the board, we've got a common approach. That's why I'm such a big fan of this risk-based approach: it's common sense, basically, and it isn't one size fits all. It also means that, culturally, we can all get together on that.

Michael Krigsman: Is there a risk of not capturing the nuances because this is so complex and, therefore, creating regulation or even policy frameworks that are just too broad-brushed?

Lord Tim Clement-Jones: There is a danger of that but, frankly, I think, at the end of the day, whatever you say about this, there are going to be tools. I think regulation is going to happen at a sector level, probably.

I think that it's fair enough to be relatively broad-brushed across the board in terms of risk assessment and the general principles to be adopted in terms of design and so on. You've got people like the IEEE who are doing ethically aligned design standards and so on.

It's when it gets down to the sector level that you then get more specific. I don't think most of us would have too much objection to that. After all, there is already alignment by sector.

For instance, the rules relating to financial services in the States (for instance in mergers, takeovers, and such) aren't very different to those in the UK, but there is a sort of competitive drive towards aligning your regulation and your regulatory rules, so to speak. I'd be quite optimistic that, actually, if we saw that (or if you saw that) there was one type of regulation in a particular sector, you'd go for it.

Automated vehicles, actually, is a very good example where regulation can actually be a positive driver of growth because you've got a set of standards that everybody can buy into and, therefore, there's business certainty.

How to balance competing interests in AI policy

Michael Krigsman: Arsalan Khan comes back with another question, a very interesting point, talking about the balancing of competing goals and interests. If you force open those algorithmic black boxes then do you run the risk of infringing the intellectual property of the businesses that are doing whatever it is that they're doing?

Lord Tim Clement-Jones: Regulators are very used to dealing with these sorts of issues of inspection and audit. I think that it would be perfectly fine for them to do that, and they wouldn't be infringing intellectual property because they wouldn't be exploiting it. They'd be inspecting but not exploiting. I think, at the end of the day, that's fine.

Also, don't forget: we've got this great concept of sandboxing now, and the regulators are much more flexible than they used to be.

Michael Krigsman: How do you balance the interests of corporations against the public good, especially when it comes to AI? Maybe give us some specific examples.

Lord Tim Clement-Jones: For instance, we're seeing that in the online situation with social media. We've got this big debate happening, for instance, on whether or not it's legitimate for Twitter to delist somebody in terms of their account with them. No doubt, the same is true with Facebook and so on.

Now, maybe I shouldn't say that it's not fair to a social media platform to have to make those decisions but, because of all the freedom of speech issues, I'd much prefer to see a reasonably clear set of principles and regulation about when social media platforms actually ought to delist somebody.

We're developing that in the UK in terms of Online Harms so that social media will have certain duties of care towards certain parts of the community, particularly young people and the vulnerable. They will have a duty to act: delisting accounts, taking down content, or what has been called detoxing the algorithm. We're going to try and get a set of principles where people are protected and social media platforms have a duty, but it isn't a blanket approach and it doesn't mean that social media have to make freedom of speech decisions in quite the same way.

Inevitably, public policy is a balance and the big problem is ignorance. It's ignorance on the part of the social media platforms as to why we would want to regulate them and it's ignorance on the part of politicians who actually don't understand the niceties of all of this when they're trying to regulate.

As you know, some of us are quite dedicated to joining it all up so people really do understand why we're doing these things and getting the right solutions. Getting the right solution in this online area is really tricky.

Of course, in the middle of it—and this is why it's relevant to AI—is the algorithm: the pushing of messages in particular directions, which is autonomous. We're back to this autonomy issue, Michael.

Sometimes, you need to say, "I'm sorry, you need to be a lot more transparent about how this is working. It shouldn't be working in that way, and you're going to have to change it."

Now, I know that's a big, big change of culture in this area, but it's happening and I think that with the new administration, Congress, and so on, I think we'll all be on the same page very shortly.

Michael Krigsman: I have to ask you about the concentration of power that's taken place inside social media companies. Social media companies, many of them born in San Francisco, technology central, and so the culture of technology, historically, has been, "Well, you know, we create tools that are beneficial for everyone, and leave us alone," essentially.

Lord Tim Clement-Jones: Well, that's exactly where I'm coming from: that culture has to change now. There is an acceptance of that, I think. If you talk to the senior people in the social media companies and the big platforms, they will now accept that the responsibility of having to make decisions about delisting people or about what content should be taken down is not something they feel very comfortable with, and they're getting quite a lot of heat as a result of it. Therefore, I think, increasingly, they will welcome regulation.

Now, obviously, I'm not predicating what kind of regulation is appropriate outside the UK or what would be accepted but, certainly, that is the way it's worked with us and there's a huge consensus across parties that we need to have a framework for the social media operations. That it isn't just Section 230, as you know, which sort of more or less allows anything to happen. In that sense, you don't take responsibility as a platform. Well, you know, not that we've ever accepted that in full in Europe but, in the UK, certainly.

Now, we think that it's time for social media platforms to take responsibility but recognizing the benefits. Good heavens. I tweet like the next person. I'm on LinkedIn. I'm no longer on Facebook. I took Jaron Lanier's advice.

There are platforms that are out there which are the Wild West. We've heard about Parler as well. We need to pull it together pretty quickly, actually.

Digital ethics: The House of Lords AI Report

Michael Krigsman: We have some questions from Twitter. Let's just start going through them. I love taking questions from Twitter. They tend to be great questions.

You created the House of Lords AI Report. Were there any outcomes that resulted from that? What did those outcomes look like?

Lord Tim Clement-Jones: Somebody asked me and said, "What was the least expected outcome?" I expected the government to listen to what we had to say and, by and large, they did.

They've moved only to a limited extent on coordination and, to touch on skills, they haven't moved nearly fast enough on skills.

They haven't moved fast enough on education and digital understanding, although, we've got a new kind of media literacy strategy coming down the track in the UK. Some of that is due to the pandemic but, actually, it's a question of energy and so on.

They've certainly done well in terms of the climate for research investment and in terms of the kind of nearer-to-market encouragement that they've given. So, I would score their card at about six out of ten. They've done well there.

They sort of said, "Yes, we accept your ethical AI, your trustworthy AI message," which was a core of what we were trying to say. They also accepted the diversity message. In fact, if I was going to say where they've performed best in terms of taking it on board, it's this diversity in the AI workforce, which I think is the biggest plus.

The really big plus has been the way the private sector in the UK has taken on board the messages about trustworthy AI, ethical AI. Now, techUK, which is our overarching trade body in the UK, they now have a regular annual conference about ethics and AI, which is fantastic. They're genuinely engaged.

In a sense, the culture of the app developer, the AI app developer, really encompasses ethics now. We don't have a kind of Hippocratic oath for developers but, certainly, the expectation is that developers are much more plugged into the principles by which they are designing artificial intelligence. I think that will continue to grow.

The education role that techUK has played with their members has been fantastic, and there is now a general expectation across the board from our regulators. We've reinforced each other in that area, which I think has been very good because, let's face it, the people who are going to develop the apps are the private sector.

The public sector, by and large, procures these things. They've now put sets of ethical principles in place for procurement: World Economic Forum principles, ethical data-sharing frameworks, and so on.

Generally, I think we've seen a fair bit of progress. But in our most recent report, we did point out where they ran the risk of being complacent, and we warned against that, basically.

Michael Krigsman: We have a really interesting question from Wayne Anderson. Wayne makes the point that it's difficult to define digital ethics at scale because of the competing interests across society that you've been describing. He said, "Who owns this decision-making, ultimately? Is it the government? Is it the people? How does it manifest? And who decides what AI is allowed to do?"

Lord Tim Clement-Jones: That's exactly my risk-based approach. It depends on what the application is. You do not want a big brother type government approach to every application of AI. That would be quite stupid. They couldn't cope anyway and it would just restrict innovation.

What you have to do—and this is back to my risk assessment approach—you have to say, "What are the areas where there's potential of detriment to the citizens, to the consumers, to society? What are those areas and then what do we do about them? What are the highest risks?"

I think that is a proportionate way of looking at dealing with AI. That is the way forward for me, and I think it's something we can agree on, basically, because risk is something that we understand. Now, we don't always get the language right, but that's something I think we can agree on.

Michael Krigsman: Wayne Anderson follows up with another very interesting question. He says, "When you talk about machine learning and statistical models, it's not soundbite friendly. To what degree are ignorance of the problem and of the nature of what's going on, and the media, inflaming the challenges here?"

Lord Tim Clement-Jones: The narrative of AI is one of the most difficult and the biggest barriers to understanding: public understanding, understanding by developers, and so on.

Unfortunately, we're victims in the West of a sort of 3,000-year-old narrative. Homer wrote about robots. Jason and the Argonauts had to escape from a robot walking around the Isle of Crete. That was 3,000 years ago.

It's been in our myths. We've had Frankenstein, the Prague Golem, you name it. We are frightened, societally and existentially frightened, by the "other," by alien creatures.

We think of AI as embedded in physical form, in robots, and this is the trouble. We've seen headlines about terminator robots.

For instance, when we launched our House of Lords report, we had headlines saying the House of Lords demands an ethical code to prevent terminator robots. You can't get away from the narrative, so you have to keep doubling down on public trust: reassurance about the principles that are applied, about the benefits of AI applications, and so on.

This is why I raised the GM foods point because—let's face it—without much narrative about GM foods, they were called Frankenfoods. They didn't have thousands of years of history about aliens, but we do with AI, so the job is bigger.

Impact of AI on society and employment

Michael Krigsman: Any conversation around AI ethics must include a discussion of the economic impacts of AI on society and the displacement, worker displacement, and economic displacements that are taking place. How do we bring that into the mix?

Lord Tim Clement-Jones: There are different forecasts and we have to accept the fact that some people are very pessimistic about the impact on the workforce of artificial intelligence and others who are much more sanguine about it. But there are choices to be made.

We have been here before. If you look at 5th Avenue in 1903, what do you see? You see all horses. If you look at 5th Avenue in 1913, you see all motor cars. I think you see one horse in the photograph.

This is something that society can adjust to but you have to get it right in terms of reskilling. One of the big problems is that we're not moving fast enough.

Not only is it about education in schools—which is not just scientific and technological education—it's about how we use AI creatively, how we use it to augment what we do, to add to what we do, not just simply substitute for what we do. There are creative ways we need to learn about in terms of using AI.

Then, of course, we have to recognize that we have to keep reinventing ourselves as adults. We can't just expect to have the same job for 30 years now. We have to keep adjusting to the technology as it comes along.

To do that, you can't just do it by yourself. You have to have—I don't know—support from government like a lifelong learning account, as if you were getting a university loan or grant. You've got to have employers who actually make the effort to make sure that their workers' skills don't simply become obsolete. You've got to be on the case for that sort of thing. We don't want a kind of digital rustbelt in all of this.

We've got to be on the case and it's a mixture of educators, employers, government, and individuals, of course. Individuals have to have the understanding to know that they can't just simply take a job and be there forever.

Michael Krigsman: Again, it seems like there's this balancing that's taking place. For example, in the role of government in helping ease this set of economic transitions but, at the same time, recognizing that there will be pain and that individuals also have to take responsibility. Do I have that right, more or less?

Lord Tim Clement-Jones: Absolutely. I'm not a great fan of the government doing everything for us because they don't always know what they need to do. To expect government to simply solve all the problems with a wave of the financial wand, I think, is unreasonable.

But I do think this is a collaboration that needs to take place. We need to get our education establishment—particularly universities and further education in terms of pre-university colleges and, if you like, those developing different kinds of more practical skills—involved so that we actually have an idea about the kinds of skills we're going to need in the future. We need to continually be looking forward to that and adjusting our training and our education to that.

At the moment, I just don't feel we're moving nearly fast enough. If we're not careful, we're going to wake up with a dreadful hangover: people without the right skills, jobs that can't be filled and, at the same time, people who can't get jobs.

This is a real issue. I'm not one of the great pessimists. I just think that, at any rate, we have a big challenge.

Michael Krigsman: We also need to talk about COVID-19. Where are you, in the UK, dealing with this issue? As somebody in the House of Lords, what is your role in helping manage this?

Lord Tim Clement-Jones: My job is to push and pull and kick and shove and try and move government on, but also be a bit of a hinge between the private sector, academia, and so on. We've got quite a community now of people who are really interested in artificial intelligence, the implications, how we further it to public benefit, and so on. I want to make sure that that community is retained and that government ministers actually listen to that community and are a part of that community.

Now, you know, I get frustrated sometimes because government doesn't move as fast as we all want it to. On algorithmic decision-making in government, our government hasn't yet woken up to the need for a fairly clear governance and compliance framework, but they'll come along. I'd love it if they were a bit faster, but I've still got enough energy to keep pushing them as fast as I can go.

Michael Krigsman: Any thoughts on what the post-pandemic work world will look like?

Lord Tim Clement-Jones: [Loud exhale] I mean, this is the existential threat, if you like: the combination of COVID and the acceleration of remote working. Lightbulbs have gone on in a lot of boardrooms about what is possible now in terms of the use of technology that wasn't there before. If we're not careful, and if people don't make the right decisions in those boardrooms, we're going to find substitution of people by technology taking place to quite a high degree, without thinking about how the best combination of technology and humans works, basically. It's just going to be seen as, "Well, we can save costs and so on," without thinking about the human implications.

If I were going to issue any kind of gypsy's warning, that's what I'd say is that, actually, we're going to find ourselves in a double whammy after the pandemic because of new technology being accelerated. All those forecasts, actually, are going to come through quicker than we thought if we're not careful.

Michael Krigsman: Any final closing thoughts as we finish up?

Lord Tim Clement-Jones: I use the word "community" a fair bit, but what I really like about the world of AI (in all its forms) whatever we're interested in—skills, ethics, regulation, risk, development, benefit, and so on—is the fact that we're a tribe of people who like discussing these things, who want to see results, and it's international. I really do believe that the kind of conversation you and I have had today, Michael, is really important in all of this. We've got international institutions that are sharing all this.

The worst thing would be if we had a race to the bottom with AI and its principles. "Okay, no, we won't have that because that's going to damage our competitiveness," or something. I think I would want to see us collaborate very heavily, and they're used to that in academia. We've got to make sure that happens in every other sphere.

Michael Krigsman: All right. Well, a very fast-moving conversation. I want to say thank you to Lord Tim Clement-Jones, CBE, for taking time to be with us today. Thank you for coming back.

Lord Tim Clement-Jones: Pleasure. Absolute pleasure, Michael.

https://www.cxotalk.com/episode/house-lords-member-talks-ai-ethics-social-impact-governance


UK at risk without a national data strategy

Leading peers on the House of Lords Select Committee on Artificial Intelligence worry that the UK will not benefit from or control AI as the national data strategy is delayed.

By Mark Chillingworth

IDG Connect | MAR 21, 2021 11:30 PM PDT

The UK has no national data strategy, which places the businesses and citizens of the European country at risk, according to the chair of the House of Lords Select Committee on Artificial Intelligence (AI). A national data strategy was promised in the autumn of 2020, but the chair of the AI Select Committee says a government consultation programme that closed in December 2020 was too shallow to provide the UK with the framework needed to derive economic, societal and innovative benefit. 

“The National Data Strategy has been delayed and will report in small parts, which will not encourage debate,” says Lord William Wallace, a Cabinet Office spokesperson in the House of Lords, the second chamber of British politics. Lord Wallace and his fellow Liberal Democrat peer Lord Tim Clement-Jones are at the forefront of a campaign within the corridors of British political power to get the National Data Strategy debated properly by those it will impact - UK businesses and citizens - and then put into practice under the leadership of a UK government Chief Data Officer.

“The questions in the consultation were closed in nature and very much suggested the government already had a view and did not want to encourage debate,” Lord Wallace adds. The current government, in place since 2010, has been incredibly vocal over the last decade about the importance of data to the UK. “They talk of nothing else and set up bodies like NHSX, and Dominic Cummings was a big fan of data,” Wallace says of the former advisor to Vote Leave, the Conservative Party and Prime Minister Boris Johnson. Lord Tim Clement-Jones worries that the attitudes of Cummings - who was forced out of the government in late 2020 - have coloured the government’s approach to a national data strategy. “He treated data as a commodity, and if data is in the hands of somebody that sees it as a commodity, it will not be protected, and that is not good for society. Palantir has a very similar view; the data is not about citizen empowerment,” Lord Clement-Jones says of the US data firm that was working on a UK Covid-19 data store.

“A small minority of politicians are following this issue, and the National Data Strategy is under the remit of the Department for Culture Media and Sport (DCMS), which is not the most powerful department in the Cabinet,” Lord Wallace says. 

In December, the House of Lords Select Committee on Artificial Intelligence published a report, AI in the UK: No Room for Complacency, which called for the establishment of a Cabinet Committee “to commission and approve a five-year strategy for AI…ensuring that understanding and use of AI, and the safe and principled use of public data, are embedded across the public service.”

Lord Clement-Jones says a Cabinet-level committee is vital due to the ad hoc status of the committee he chairs. In addition, the rate of AI growth requires the government to pay close attention to the detail and impact of AI. As the report revealed: “in 2015, the UK saw £245 million invested in AI. By 2018, this had increased to over £760 million. In 2019 this was £1.3 billion...It is being used to help tackle the COVID-19 pandemic, but is also being used to underpin facial recognition technology, deep fakes, and other ethically challenging uses.”

“One of the big issues for us is, where do you draw the line for public usage? AI raises lots of issues, and as a select committee, we are navigating the new world of converging technologies such as the Internet of Things, cloud computing and the issue of sovereignty. And we have seen in the last few months that this government will subordinate all sorts of issues to sovereignty,” Lord Clement-Jones says. He adds that, as a result of the sovereignty debate, businesses on both sides of the Channel have lost vital mutual benefits.

“You have to look at these issues incredibly carefully. If people are too cavalier about things, like the Home Office has been over work permits, then it's very concerning...Take the recent trade deal with Japan, it is not at all clear that UK health data is part of this deal, and the government is walking blindly into this stuff,” Lord Clement-Jones says.

Data adequacy between the UK and Europe ends in June 2021 and a number of CIOs report concerns about the loss of existing data standards and protocols with the UK’s largest trading partner. “Relationships between government and business are very poor,” the Lord adds.

Despite the attitude of “F**k business” from Prime Minister Boris Johnson, Lord William Wallace says there is a vibrant debate about data and ethics amongst the UK business and technology community, which has to be harnessed because, he says, data is not debated enough in politics or the mainstream media. “We only hear the lurid headlines about Cambridge Analytica and never the benefits this technology offers.”

Data did, momentarily, become mainstream during the worst periods of the pandemic, with local government and health agencies revealing that they were not being given full access to Covid-19 data by the central government. “The over-centralisation is very much part of the problem; we have not used public health authorities effectively, for example,” Lord Wallace says. He adds that how local and national governments collect and release data to one another needs to be discussed and addressed. “We have some really powerful combined authorities in the UK now, and their data is really granular,” he says, adding that now that GPs and local health bodies are in charge of the UK Covid vaccination programme, successful results are being delivered. Centralisation of the initial pandemic response in the UK has led to the highest death toll in Europe and one of the highest mortality rates in the world.

Global standing

As the UK exited the European Union, there was a narrative from Boris Johnson that the UK’s trading future would be closely aligned with the USA but, with Johnson’s close ally Donald Trump losing the US presidential election in 2020, the two Lords wonder if Johnson can be so assured, especially when it comes to data, and they worry about the impact on British business. “The government don’t stop to look at where data flows,” Lord Clement-Jones says of the poor business relationship leading to a poor understanding. On the USA, they believe the new Biden administration will have to move towards greater data protection. On the flip side, Lord Wallace points out that the government has been championing the UK’s role in the Five Eyes security services pact, yet he claims there is no written agreement between the two nations, and it is not clear whether the USA’s security services are able to carry out mass data collection in the UK from the shared intelligence centre at Menwith Hill.

It is for this reason the two Lords believe it is vital that the UK engages in a national debate about data’s benefits and public concerns. “The public are most scared about health data as it is the one they are most aware of, yet the debate about the government’s collection of data is absent from public debate,” Lord Wallace says. Lord Clement-Jones adds that he is concerned that there is a danger of public distrust growing. “So now it is about how do we create a debate so that we create a circular flow of data that benefits society and involves important and respected organisations like the Ada Lovelace Institute, Big Brother Watch and the Open Data Institute?”

“The UK remains an attractive place to learn, develop, and deploy AI. It has a strong legal system, coupled with world-leading academic institutions, and industry ready and willing to take advantage of the opportunities presented by AI,” concludes the House of Lords report AI in the UK: No Room for Complacency.

https://www.idgconnect.com/article/3611769/uk-at-risk-without-a-national-data-strategy.html


Lord Clement-Jones on protecting and valuing our healthcare data December 2020

Future Care Capital Guest blog December 2020

https://futurecarecapital.org.uk/latest/lord-clement-jones/

With the EU/UK negotiations on a knife edge, the recent conclusion of a UK/Japan trade agreement, consultation on a National Data Strategy and the current passage of a Trade Bill through parliament, data issues are front and centre of policy making.

NHS data in particular is, of course, a precious commodity, especially given the many transactions between technology, telecoms and pharma companies concerned with NHS data. EY, in a recent report, estimated that the value of NHS data could be around £10 billion a year in benefit delivered.

The Department of Health and Social Care is preparing to publish its National Health and Care Data Strategy in the New Year, in which it is expected to prioritise the “Safe, effective and ethical use of data-driven technologies, such as Artificial Intelligence, to deliver fairer health outcomes”. Health professionals have strongly argued that free trade deals risk compromising the safe storage and processing of NHS data.

The objective must be to ensure that the NHS and not the US big tech companies and drug giants reap the benefit of all this data. Harnessing the value of healthcare data must be allied with ensuring that adequate protections are put in place in trade agreements if that value isn’t to be given or traded away.

There is also the need for data adequacy to ensure that personal data transfers to third countries outside the EU are protected, in line with the principles of the GDPR. Watering down the UK’s data protection legislation will only reduce the chances of receiving an adequacy decision.

There is also a concern that the proposed National Data Strategy will lead to the weakening of data protection legislation, just as it becomes ever more necessary for securing citizens’ rights. There should however be no conflict between good data governance and economic growth and better government through effective use of data.

The section of the Final Impact Assessment of the Comprehensive Economic Partnership Agreement between the UK and Japan, which deals with Digital trade provisions, says that the agreement “contains commitments to uphold world-leading standards of protection for individuals’ personal data, in line with the UK’s Data Protection Act 2018, when data is being transferred across borders. This ensures that both consumer and business data can flow across borders in a safe and secure manner.”

But the agreement has Article 8.3(a), which appears to provide a general exception for data flows where this is “necessary to protect public security or public morals or to maintain public order or… to protect human, animal or plant life or health”. So, the question has been raised whether this will override UK data protection law and give access to source code and algorithms.

To date there have been shortcomings in the sharing of data between various parts of the health service, care sector and civil service. The process of development of the COVID-19 app has not improved public trust in the Government’s approach to data use.

There is also a danger that the UK will fall behind Europe and the rest of the world unless it takes back control of its data and begins to invest in its own cloud capabilities.

Specifically, we need to ensure genuine sovereignty of NHS data and that it is monetized in a safe way focused on benefitting the NHS and our citizens.

With a new National Data Strategy in the offing there is now the opportunity for the government to maximize the opportunities afforded through the collection of data and position the UK as leader in data capability and data protection. We can do this and restore credibility and trust through:

  • Guaranteeing greater transparency of how patient data is handled, where it is stored and with whom, and what it is being used for, especially through vehicles such as data trusts and social data foundations
  • Appropriate and sufficient regulation that strikes the right balance between credibility, trust, ethics and innovation
  • Ensuring service providers that handle patient data operate within a tight ethical framework
  • Ensuring that the UK’s data protection regulation isn’t watered down as a consequence of Brexit or through trade agreements
  • Making the UK the safest place in the world to process and store data. In delivering this last objective there is a real opportunity for the government to lead by example, not just for the UK but for the rest of the world, by developing its own sovereign data capability.

Retention of control over our publicly generated data, particularly health data, for planning, research and innovation is vital if the UK is to maintain its position as a leading life science economy and innovator. That is why, as part of the new trade legislation being put in place, clear safeguards are needed to ensure that in trade deals our publicly held data is safe from exploitation, except as determined by our own government’s democratically taken decisions.


Lord Clement-Jones on Trustworthy Trade and Healthcare Data April 2021

Future Care Capital Guest Blog April 2021

I’m an enthusiast for the adoption of new technology in healthcare, but it is concerning when a body such as Axrem, which represents a number of health tech companies, says that while there is much interest in pilots and proof-of-concept projects, the broad adoption of AI is still problematic for many providers, for reasons that include the fact that “some early healthcare AI projects have failed to manage patient data effectively, leading to scepticism and concern among professionals and the public.”

I share this concern – especially when we know that some big tech and big pharma companies seem to have a special relationship with the DHSC (Department of Health and Social Care) – and in the light of the fact that one of the government’s 10 new priorities is:

“Championing free and fair digital trade: As an independent nation with a thriving digital economy, the UK will lead the way in a new age of digital trade. We will ensure our trade deals include cutting-edge digital provisions, as we did with Japan, and forge new digital partnerships and investment opportunities across the globe”

The question is what guarantee do we have that our health data will be used in an ethical manner, assigned its true value and used for the benefit of UK healthcare?

Back in April 2018, in our House of Lords AI Select Committee report, ‘AI in the UK: Ready, Willing and Able?’, we identified the issue:

  1. The data held by the NHS could be considered a unique source of value for the nation. It should not be shared lightly, but when it is, it should be done in a manner which allows for that value to be recouped.

This received the bland government response:

“We will continue to work with ICO, NDG, regulatory bodies, the wider NHS and partners to ensure that appropriate regulatory frameworks, codes of conduct and guidance are available.”

Since then, of course, we have had a whole series of documents designed to reassure on NHS data governance.

But all lack assurance on the mechanisms for oversight and compliance.

Then in July last year the CDEI (Centre for Data Ethics and Innovation) published “Addressing trust in public sector data use”, which gives the game away. They said:

“Efforts to address the issue of public trust directly will have only limited success if they rely on the well-trodden path of developing high-level governance principles and extolling the benefits of successful initiatives.

“While principles and promotion of the societal benefits are necessary, a trusted and trustworthy approach needs to be built on stronger foundations. Indeed, even in terms of communication there is a wider challenge around reflecting public acceptability and highlighting the potential value of data sharing in specific contexts.”

So the key question is, what is actually happening in practice?

We debated this during the passage of both the Trade Bill and the Medicines and Medical Devices Bill and the results were not reassuring. In both bills we tried to safeguard state control of policy-making and the use of publicly funded health and care data as a significant national asset.

As regards the Japan/UK Trade Agreement, for example, the Government Minister said – when pressed – at Report Stage that it “removes unjustified barriers to data flows to ensure UK companies can access the Japanese market and provide digital services. It does this by limiting the ability for governments to put in place unjustified rules that prevent data from flowing and create barriers to trade.”

But as Lord Freyberg rightly said at the time, there is widespread recognition that the NHS uniquely controls nationwide longitudinal healthcare data, which has the potential to generate clinical, social and economic development as well as commercial value. He argued that the Government should take steps to protect and harness the value of that data and, in the context of the Trade Bill, ensure that the public can be satisfied that that value will be safeguarded and, where appropriate, ring-fenced and reinvested in the UK’s health and care system.

In a Medicines Bill debate in January, Lord Bethell employed an extraordinarily circular argument:

“It is important to highlight that we could only disclose information under this power where disclosure is required in order to give effect to an international agreement or arrangement concerning the regulation of human medicines, medical devices or veterinary medicines. In that regard, the clause already allows disclosure only for a particular purpose. As international co-operation in this area is important and a good, even necessary, thing, such agreements or arrangements would be in the public interest by default.”

So, it is clear we still do not have adequate provisions regarding the international exploitation of health data which, according to a report by EY, could be worth around £10 billion a year in benefit delivered.

We were promised the arrival of a National Health and Care Data Strategy last autumn. In the meantime, trade agreements are made, Medicine Bills are passed, and we have little transparency about what is happening as regards NHS data – especially in terms of contracts with companies like Palantir and Amazon.

The Government is seeking to champion the free flow of data almost as an ideology. This is clear from the replies we received during the Trade and Medicines and Medical Devices Bills and indeed a recent statement by John Whittingdale, the Minister for Media and Data. He talks about the:

“…UK’s new, bold approach to international data transfers”,

“Our international strategy will also explore ways in which we can use data as a strategic asset in the global arena and improve data sharing and innovation between our international partners.”

and finally…

“Our objective is for personal data to flow as freely and as safely as possible around the world, while maintaining high standards of data protection.”

What do I prescribe?

At the time when these issues were being debated, I received an excellent briefing from Future Care Capital which proposed that “Any proceeds from data collaborations that the Government agrees to, integral to any ‘replacement’ or ‘new’ trade deals, should be ring-fenced for reinvestment in the health and care system, pursuant with FCC’s long-standing call to establish a Sovereign Health Fund.”

This is an extremely attractive concept. Retaining control over our publicly generated data, particularly health data, for planning, research and innovation is vital if the UK is to maintain its position as a leading life science economy and innovator.

Furthermore, with a new National Data Strategy in the offing there is now the opportunity for the government to maximize the opportunities afforded through the collection of data and position the UK as leader in data capability and data protection.

We can do this and restore credibility and trust through guaranteeing greater transparency of how patient data is handled, where it is stored and with whom and what it is being used for, especially through vehicles such as data trusts and social data foundations.

As the Understanding Patient Data and Ada Lovelace Institute report ‘Foundations of Fairness’, published in March 2020, said:

“Public accountability, good governance and transparency are critical to maintain public confidence. People care about NHS data and should be able to find out how it is used. Decisions about third party access to NHS data should go through a transparent process and be subject to external oversight.”

This needs to go together with ensuring:

  • Appropriate and sufficient regulation that strikes the right balance between credibility, trust, ethics and innovation;
  • service providers that handle patient data operate within a tight ethical framework;
  • that the UK’s data protection regulation is not watered down as a consequence of Brexit or through trade agreements; and
  •  the UK develops its own sovereign data capability to process and store data.

To conclude

As the report “NHS Data: Maximising its impact on the health and wealth of the United Kingdom”, published last February by Imperial College’s Institute of Global Health Innovation, said:

“Proving that NHS and other health data are being used to benefit the wider public is critical to retaining trust in this endeavour.”

At the moment that trust is being lost.

Lord Clement-Jones was made CBE for political services in 1988 and a life peer in 1998. He is the Liberal Democrat House of Lords spokesperson for Digital (2017-), previously spokesperson on the Creative Industries (2015-17). He is the former Chair of the House of Lords Select Committee on Artificial Intelligence which sat from 2017 to 2018 and Co-Chairs the All-Party Parliamentary Group on AI. Tim is a founding member of the OECD Parliamentary Group on AI and a member of the Council of Europe’s Ad-hoc Committee on AI (CAHAI). He is a former member of the House of Lords Select Committees on Communications and the Built Environment. Currently, he is a member of the House of Lords Select Committee on Risk Assessment and Risk Planning. He is a Consultant of global law firm DLA Piper where previous positions held included London Managing Partner (2011-16), Head of UK Government Affairs, Chairman of its China and Middle East Desks, International Business Relations Partner and Co-Chairman of Global Government Relations. He is Chair of Ombudsman Services Limited, the not for profit, independent ombudsman service providing dispute resolution for the communications, energy, property and copyright licensing industries. He is Chair of Council of Queen Mary University of London and Chairs the Advisory Council of the Institute for Ethical AI in Education, led by Sir Anthony Seldon. He is a Senior Fellow of the Atlantic Council’s GeoTech Center which focusses on technology, altruism, geopolitics and competition. He is President of Ambitious About Autism, an autism education charity and school.

https://futurecarecapital.org.uk/latest/guest-blog-lord-clement-jones/

How the OECD’s AI system classification work added to a year of progress in AI governance

Despite the COVID pandemic, we can look back on 2020 as a year of positive achievement in progress towards understanding what is needed in the governance and regulation of AI.

Lord C-J OECD Blog Jan 2021

AI in 2020

It has never been clearer, particularly after this year of COVID and our ever greater reliance on digital technology, that we need to retain public trust in the adoption of AI.

To do that we need to mitigate the risks involved in the application of AI even as we realize its opportunities. This brings with it the need for a clear standard of accountability.

A year of operationalizing AI ethical principles

2019 was the year of the formulation of high-level ethical principles for AI by the OECD, EU and G20. These are very comprehensive and provide the basis for a common set of international standards, but it has become clear that voluntary ethical guidelines are not enough to guarantee ethical AI.

There comes a point where the risks attendant on non-compliance with ethical principles are so high that policy makers need to understand when certain forms of AI development and adoption require enhanced governance and/or regulation. The key factor in 2020 has been the work done at international level in the Council of Europe, the OECD and the EU towards operationalizing these principles in a risk-based approach to regulation.

And they have been very complementary. The Council of Europe’s Ad Hoc Committee on AI (CAHAI) has drawn up a Feasibility Study for the regulation of AI which advocates a risk-based approach to regulation, as does last year’s EU White Paper on AI.

As the EU White Paper said: “As a matter of principle, the new regulatory framework for AI should be effective to achieve its objectives while not being excessively prescriptive so that it could create a disproportionate burden, especially for SMEs. To strike this balance, the Commission is of the view that it should follow a risk-based approach.”

They go on to say:

“A risk-based approach is important to help ensure that the regulatory intervention is proportionate. However, it requires clear criteria to differentiate between the different AI applications, in particular in relation to the question whether or not they are ‘high-risk’. The determination of what is a high-risk AI application should be clear and easily understandable and applicable for all parties concerned.”

The feasibility study develops this further with discussion about the nature of the risks particularly to fundamental rights, democracy and the rule of law.

As the Study says: “These risks, however, depend on the application context, technology and stakeholders involved. To counter any stifling of socially beneficial AI innovation, and to ensure that the benefits of this technology can be reaped fully while adequately tackling its risks, the CAHAI recommends that a future Council of Europe legal framework on AI should pursue a risk-based approach targeting the specific application context. This means not only that the risks posed by AI systems should be assessed and reviewed on a systematic and regular basis, but also that any mitigating measures …should be specifically tailored to these risks.”


Governance must match the level of risk

Nonetheless, it is a complex matter to assess the nature of AI applications and their contexts, and to carry the consequent risks forward into models of governance and regulation. If we aspire to a risk-based regulatory and governance approach, we need to be able to calibrate the risk. This will in turn determine the necessary level of control.

Given this kind of calibration, there is a clear governance hierarchy to follow, depending on the rising risk involved. Where the risk is lower, actors can adopt a flexible approach such as a voluntary ethical code without a hard compliance mechanism. Where the risk is higher, they will need to institute enhanced corporate governance using business guidelines and standards, with clear disclosure and compliance mechanisms.

Then we have government best practice, such as the AI procurement guidelines developed by the World Economic Forum and adopted by the UK government. Finally, and as some would say as a last resort, we introduce comprehensive regulation, such as that being adopted for autonomous vehicles, which is enforceable by law.

In regulating, developers need to be able to take full advantage of regulatory sandboxing, which permits the testing of a new technology without the threat of regulatory enforcement, but with strict oversight and individual formal and informal guidance from the regulator.

There are any number of questions which arise in considering this governance hierarchy, but above all, we must ask ourselves if we have the necessary tools for risk assessment and a clear understanding of the necessary escalation in compliance mechanisms to match.

As has been well illustrated during the COVID pandemic, the language of risk is fraught with misunderstanding. When it comes to AI technologies, we need to assess risks such as the likely impact and probability of harm, the importance and sensitivity of the data used, the application within a particular sector, the risk of non-compliance, and whether a human in the loop mitigates risk to any degree.
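
To illustrate how this kind of calibration might feed the governance hierarchy described above, here is a deliberately simplified sketch in Python. It is not drawn from the CAHAI feasibility study, the EU White Paper or the OECD framework: the factor names, weights, thresholds and the human-in-the-loop discount are all invented for illustration.

# A toy risk calibration: score an AI application on the factors discussed
# above, then map the total to one of the governance tiers described earlier.
# All weights and thresholds are hypothetical.

from typing import Dict

FACTOR_WEIGHTS = {
    "impact_of_harm": 3,        # severity if the system goes wrong
    "probability_of_harm": 3,
    "data_sensitivity": 2,      # e.g. health, biometric or financial data
    "sector_criticality": 2,    # e.g. justice, policing, critical infrastructure
}

GOVERNANCE_TIERS = [
    (0.25, "voluntary ethical code"),
    (0.50, "enhanced corporate governance with disclosure and compliance"),
    (0.75, "government best practice, e.g. procurement guidelines"),
    (1.01, "comprehensive regulation enforceable by law"),
]

def calibrate(scores: Dict[str, float], human_in_the_loop: bool) -> str:
    """Map factor scores (each in the range 0..1) to a governance tier."""
    weighted = sum(FACTOR_WEIGHTS[name] * scores[name] for name in FACTOR_WEIGHTS)
    total = weighted / sum(FACTOR_WEIGHTS.values())
    if human_in_the_loop:
        total *= 0.8            # assume meaningful human oversight mitigates risk
    for threshold, tier in GOVERNANCE_TIERS:
        if total < threshold:
            return tier
    return GOVERNANCE_TIERS[-1][1]

# Example: live facial recognition in policing, with no human review of matches.
print(calibrate(
    {"impact_of_harm": 0.9, "probability_of_harm": 0.6,
     "data_sensitivity": 1.0, "sector_criticality": 1.0},
    human_in_the_loop=False,
))  # -> comprehensive regulation enforceable by law

The point of the sketch is the shape rather than the numbers: risk is calibrated first, and the compliance mechanism escalates with the calibrated risk.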

AI systems classification framework at the OECD

The detailed and authoritative classification work carried out by the OECD Network of Experts Working Group on the Classification of AI systems comes at a crucial and timely point.

The preliminary classification framework for AI systems comprises four key pillars:

  1. Context: This refers to who is deploying the AI system and in what environment. This includes several considerations such as the business sector, the breadth of deployment, the system maturity, the stakeholders impacted and the overall purpose, such as for profit or not for profit.
  2. Data and Input: This refers to the provenance of the data the system uses, where and by whom it has been collected, the way it evolves and is updated, its scale and structure and whether it is public or private or personal and its quality.
  3. The AI Model, i.e. the underlying particularities that make up the AI system – is it, for instance, a neural network or a linear model? Supervised or unsupervised? A discriminative or generative model, probabilistic or non-probabilistic? How does it acquire its capabilities? From rules or machine learning? How far does the AI system conform to ethical design principles such as explainability and fairness?
  4. The Task and Output: This examines what the AI system actually does. What are the outputs that make up the results of its work? Does it forecast, personalize, recognize, or detect events, for example?

Within the Context heading, the framework includes consideration of the benefits and risks to individuals in terms of impact on human rights and wellbeing, and effects on infrastructure and how critical sectors function. To fit with the CAHAI and EU risk-based approach and be of maximum utility, however, this should really be an overarching consideration applied after all the other elements have been assessed, as the sketch below illustrates.
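
To make the four pillars concrete, here is a minimal sketch of how a classification record along these lines might be represented in code. It is purely illustrative: the OECD framework does not prescribe a schema, and every class and field name below is an assumption chosen to mirror the descriptions above.

from dataclasses import dataclass
from typing import List

# Hypothetical schema mirroring the four pillars described above; this is
# not an official OECD data model.

@dataclass
class Context:                       # Pillar 1: who deploys it, and where
    sector: str                      # e.g. "healthcare", "criminal justice"
    breadth_of_deployment: str       # e.g. "pilot", "sector-wide"
    maturity: str
    stakeholders_impacted: List[str]
    for_profit: bool

@dataclass
class DataAndInput:                  # Pillar 2: provenance and nature of the data
    collected_by: str
    personal_data: bool
    public: bool
    scale: str
    quality: str

@dataclass
class AIModel:                       # Pillar 3: the underlying model
    model_family: str                # e.g. "neural network", "linear model"
    supervised: bool
    generative: bool
    probabilistic: bool
    rule_based: bool                 # rules rather than machine learning

@dataclass
class TaskAndOutput:                 # Pillar 4: what the system actually does
    tasks: List[str]                 # e.g. ["forecast", "personalize", "detect events"]

@dataclass
class AISystemClassification:        # one record covering all four pillars
    context: Context
    data_and_input: DataAndInput
    model: AIModel
    task_and_output: TaskAndOutput

# Example: a hypothetical reoffending-risk tool used in criminal justice.
example = AISystemClassification(
    context=Context("criminal justice", "single police force", "pilot",
                    ["defendants", "courts"], for_profit=False),
    data_and_input=DataAndInput("police forces", personal_data=True,
                                public=False, scale="national", quality="uneven"),
    model=AIModel("neural network", supervised=True, generative=False,
                  probabilistic=True, rule_based=False),
    task_and_output=TaskAndOutput(["predict reoffending risk"]),
)

Structured this way, an overall risk rating would naturally be computed as a function of the completed record as a whole, which is the overarching assessment suggested above.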

Also see: A first look at the OECD’s Framework for the Classification of AI Systems, designed to give policymakers clarity

The fundamental risks of algorithmic decision-making

One of the key questions, of course, is whether, on the basis of this kind of classification and risk assessment, there are early candidates for regulation.

The Centre for Data Ethics and Innovation, created in the UK two years ago, recently published its AI Barometer Report. This also discusses risk and regulation, and it found a common core of risk across sectors.

They say: “While the top-rated risks varied from sector to sector, a number of concerns cropped up across most of the contexts we examined. This includes the risks of algorithmic bias, a lack of explainability in algorithmic decision-making, and the failure of those operating technology to seek meaningful consent from people to collect, use and share their data.”

A good example of where some of these issues have already arisen is the use of live facial recognition technology, which is becoming widespread. It is unusual for London’s Metropolitan Police Commissioner to describe a new technology as Orwellian (a reference to George Orwell’s seminal novel “1984”, in which he coined the phrase “Big Brother”), as she did last year when talking about live facial recognition, yet the force is now beginning to adopt it at scale.

In addition, over the past few years we have seen a substantial increase in the adoption of algorithmic decision making and prediction, or ADM, across central and local government in the UK. In criminal justice and policing, algorithms for prediction and decision making are already in use.

Another high-risk AI technology which needs to be added to the candidates for regulation is the use of AI applications for recruitment processes as well as in situations impacting employees’ rights to privacy.

Future decision-making processes in financial services may also be considered high risk and become candidates for regulation. This concerns areas such as credit scoring or the determination of insurance premiums by AI systems.

AI risk and regulation in 2021 and beyond

The debate over hard and soft law in this area is by no means concluded. Denmark and a number of other EU member states have recently felt the need to put a stake in the ground with what is called a non-paper to the EU Commission over concerns that AI and other digital technologies may be overregulated in the EU’s plans for digital regulation.

Whether in the public or private sector, the cardinal principle must be that AI needs to be our servant, not our master. Going forward, there is cause for optimism that experts, policy makers and regulators now recognize that there are varying degrees of risk in AI systems. We can classify and calibrate AI and develop the appropriate policies and solutions to ensure safety and trust. As a result, we can all expect further progress in 2021.



https://oecd.ai/wonk/contributors/lord-tim-clement-jones


No Room for Complacency: Making ethical artificial intelligence a reality

OECD Blog Feb 2021

https://www.oecd-forum.org/posts/no-room-for-complacency-making-ethical-artificial-intelligence-a-reality

This article is part of a series in which OECD experts and thought leaders — from around the world and all parts of society — address the COVID-19 crisis, discussing and developing solutions now and for the future. Aiming to foster the fruitful exchange of expertise and perspectives across fields to help us rise to this critical challenge, opinions expressed do not necessarily represent the views of the OECD.



In April 2018, the House of Lords AI Select Committee I chaired produced its report AI in the UK: Ready, Willing and Able?, a special enquiry into the United Kingdom’s artificial intelligence (AI) strategy and the opportunities and risks afforded by it. It made a number of key recommendations that we have now followed up with a short supplementary report, AI in the UK: No Room for Complacency, which examines the progress made by the UK Government, drawing on interviews with government ministers, regulators and other key players in the AI field. 

Since the publication of our original report, investment in, and focus on, the United Kingdom's approach to artificial intelligence have grown significantly. In 2015, the United Kingdom saw GBP 245 million invested in AI. By 2018, this had increased to over GBP 760 million. In 2019, it was GBP 1.3 billion.


Artificial intelligence has been deployed in the United Kingdom in a range of fields, from agriculture and healthcare to financial services, customer service, retail, and logistics. It is being used to help tackle the COVID-19 pandemic, and is also being used to underpin facial recognition technology, deep fakes and other ethically challenging uses.

Our conclusion is that the UK Government has done well to establish a range of bodies to advise it on AI over the long term. However, we caution against complacency.

There are many bodies outside the framework of government that are to a greater or lesser extent involved in an advisory role: the AI Council, the Centre for Data Ethics and Innovation, the Ada Lovelace Institute and the Alan Turing Institute.

Co-ordination between the various bodies involved with the development of AI, including the various regulators, is essential. The UK Government needs to better co-ordinate its AI policy and the use of data and technology by national and local government.

A Cabinet Committee must be created; its first task should be to commission and approve a five-year strategy for AI. This strategy should prepare society to take advantage of AI, rather than feel it is being taken advantage of.

In our original report, we proposed a number of overarching principles providing the foundation for an ethical standard of AI for industry, government, developers and consumers. Since then, a clear consensus has emerged that ethical AI is the only sustainable way forward.

The United Kingdom is a signatory of the OECD Recommendation on AI, embodying five principles for responsible stewardship of trustworthy AI, and of the G20 non-binding principles on AI. This demonstrates the United Kingdom's commitment to collaborate on the development and use of ethical AI, but it is yet to take on a leading role.

The time has come for the UK Government to move from deciding what the ethics are, to how to instill them in the development and deployment of AI systems. We say that our government must lead the way on making ethical AI a reality. To not do so would be to waste the progress it has made to date, and to squander the opportunities AI presents for everyone in the United Kingdom.

We call for the Centre for Data Ethics and Innovation to establish and publish national standards for the ethical development and deployment of AI. These standards should consist of two frameworks: one for the ethical development of AI, including issues of prejudice and bias; and the other for the ethical use of AI by policymakers and businesses. 

However, we have concluded that the challenges posed by the development and deployment of AI cannot necessarily be tackled by cross-cutting regulation. Users and policymakers need a better understanding of risk, and of how it can be assessed and mitigated in the context in which AI is applied, so our sector-specific regulators are best placed to identify gaps in regulation.

AI will become embedded in everything we do. As regards skills, government inertia is a major concern. The COVID-19 pandemic has thrown these issues into sharp relief. As and when the COVID-19 pandemic recedes, and the UK Government addresses the economic impact of it, the nature of work will have changed and there will be a need for different jobs and skills.

This will be complemented by opportunities for AI, and the Government and industry must be ready to ensure that retraining opportunities take account of this. 

The Government needs to take steps so the digital skills of the United Kingdom are brought up to speed, as well as to ensure that people have the opportunity to reskill and retrain to be able to adapt to the evolving labour market caused by AI.

It is clear that the pace, scale and ambition of government action does not match the challenge facing many people working in the United Kingdom. It will be imperative for the Government to move much more swiftly. A specific training scheme should be designed to support people to work alongside AI and automation, and to be able to maximise its potential.

The question at the end of the day remains whether the United Kingdom is still an attractive place to learn about and work in AI. Our ability to attract and retain the top AI research talent is of paramount importance, and it will therefore be hugely unfortunate if the United Kingdom takes a step back, with the result that top researchers will be less willing to come here.

The UK Government must ensure that changes to the immigration rules promote—rather than obstruct—the study, research, and development of AI.


Lord Tim Clement-Jones

Former Chair of House of Lords Select Committee on AI / Co-Chair of APPG on AI, House of Lords, United Kingdom. Lord Clement-Jones was made CBE for political services in 1988 and a life peer in 1998. He is the Liberal Democrat House of Lords spokesperson for Digital. He is the former Chair of the House of Lords Select Committee on AI, which sat from 2017-18; Co-Chair of the All-Party Parliamentary Group (“APPG”) on AI; a founding member of the OECD Parliamentary Group on AI; and a member of the Council of Europe’s Ad-hoc Committee on AI (“CAHAI”). He is a former member of the House of Lords Select Committees on Communications and the Built Environment, and a current member of the House of Lords Select Committee on Risk Assessment and Risk Planning. He is Deputy-Chair of the APPG on China and Vice-Chair of the APPGs on ‘The Future of Work’ and ‘Digital Regulation and Responsibility’. He is a Consultant at DLA Piper, where previous positions include London Managing Partner, Head of UK Government Affairs and Co-Chair of Global Government Relations. He is Chair of Ombudsman Services Limited, the not-for-profit, independent ombudsman providing dispute resolution for the communications, energy and parking industries. He is Chair of Council of Queen Mary University of London; Chair of the Advisory Council of the Institute for Ethical AI in Education; and a Senior Fellow of the Atlantic Council’s GeoTech Center.


Why data trusts could help us better respond and rebuild from COVID-19 globally

Lord C-J April 2020

What are data trusts? What roles can data trusts play in the global response to COVID-19? What can the U.S. learn from the U.K.’s activities involving data trusts and AI? Please join the Atlantic Council’s GeoTech Center on Wednesday, April 15 at 12:30pm EDT, for a discussion with Lord Tim Clement-Jones, Dame Wendy Hall, and Dr. David Bray on the role of data trusts in the global response to and recovery from COVID-19. The discussion will cover data and AI activities occurring in the United Kingdom and what other countries can learn from these efforts.

Please join us for this important conversation. You can register to receive further information on how to join the virtual audience via Zoom or watch the video live streamed on this web page. If you wish to join the question and answer period, you must join by the Zoom app or web after registering. 

https://www.youtube.com/watch?v=CyGYDAxyVbk
https://www.atlanticcouncil.org/event/why-data-trusts-could-help-us-better-respond-and-rebuild-from-covid19-globally/

The geopolitics of digital identity: Dr. David Bray and Lord Tim Clement-Jones

July 2020

Throughout the course of the COVID-19 pandemic, technologists have pointed out how digital identity systems could remedy some of the difficulties that we face as an open society suddenly unable to interact face-to-face. Even those who previously did not consider themselves to be “digital natives” have been forced to adopt a digital lifestyle, one in which traditional sources of identification and trust-building have become less useful.

Lord Tim Clement-Jones, a Nonresident Senior Fellow with the GeoTech Center, and Dr. David Bray, Director of the GeoTech Center, discussed the issue of digital identity at a recent IdentityNorth Summit event. Lord Clement-Jones pointed out that technologies for securely connecting an individual’s digital presence to their identity are not new, but have yet to be applied at a national scale, or in the universal manner that would be necessary to maximize their impact. He recognized, though, that certain applications of digital identity technology might concern ordinary people: though he might be comfortable using his digital identity as part of the United Kingdom Parliament’s new system for MPs to vote, the average citizen might be concerned about their votes being tabulated digitally, or connected to other facets of their online identity.

As a result, the experts emphasized how digital identity, in whatever forms it takes, needs to be inclusive of all individuals and experiences, regardless of, for example, their level of literacy or digital accessibility. Though analog identity systems are by no means perfect, initial pilot programs, similar to the Canadian system in development, will need to roll out a hybrid of both physical and digital forms of identity to protect against identity theft and misuse of digital identity systems.

Watch the video above to hear more of Lord C-J’s commentary on what precautions must be taken to enable the success of digital identity in a post-COVID-19 world.

https://www.atlanticcouncil.org/insight-impact/in-the-news/the-geopolitics-of-digital-identity-dr-david-bray-and-lord-tim-clement-jones/

The UK's Role In The Future Of AI

Kathleen Walch, Contributor, COGNITIVE WORLD Contributor Group

Forbes Magazine April 2020

The UK has played an important role in the history and development of AI. Alan Turing, a British mathematician, is considered to be the father of theoretical computer science and has deep roots in AI as well.  In addition to crafting the foundations for modern computing, Turing envisioned the Turing test, which aims to determine a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. 

While the UK was heavily involved in AI development from the very first years, the UK also helped bring about the first AI Winter in the industry as well. The Lighthill report cast a deep shadow on AI’s promises and caused a sharp pullback in funding from the government, research institutions, and universities. The report represented a pessimistic view of AI and was highly critical of many core aspects of research in this field.

However, with the resurgence of interest and investment in AI, the UK has likewise been making heavy investments in AI and, as a result, continues to show its strength in the field. According to a recent report by research firm Cognilytica, the United Kingdom has one of the strongest AI strategies in the world, with strong government funding for AI, strong research activity in the field, strong VC funding and AI startups, and strong enterprise activity and adoption of AI. (Disclosure: I’m a principal analyst at Cognilytica.) So where is the UK heading with regards to its overall investment in and support of AI?

Parliament's Role in AI

The AI Today podcast interviewed Lord Tim Clement-Jones, Co-Chair of the All Party Parliamentary Group on Artificial Intelligence and former chair of the House of Lords Artificial Intelligence Select Committee. In 2017 the UK established an All Party Parliamentary Group on Artificial Intelligence to address ethical issues, industrial norms, regulatory options and social impact for AI in Parliament. Despite AI’s history of periods of little interest and funding, the times have changed. According to Lord Clement-Jones, AI is finally here to stay, which is why he set out to learn more about the future of AI with some of his peers. In doing this, he has ended up as a bit of an expert on the topic, and is now publicly speaking about what AI could mean for all of us.

Artificial intelligence is changing fast, and with it we must consider what the future use of this technology might bring. Despite the fact that many people assume Silicon Valley is where the majority of development is being carried out, the reality is that AI is being developed all around the globe. In Cognilytica’s above-mentioned report, the countries leading the way include the United States, United Kingdom, France, and Israel, with China, South Korea, Germany, and many other countries very close behind across a range of measures. AI is being pursued by governments and businesses alike, which means that there is serious potential for unexpected breakthroughs, but it also makes it nearly impossible to know what AI might look like in the future.

Lord Tim Clement-Jones thinks that AI has the power to do some amazing things, given the broad spectrum it covers and the fact that it can be applied to many aspects of life and to just about every single industry. However, just how it can and will be applied also makes it difficult to regulate. He is particularly concerned with the ethics of this technology and how we can best go about creating and using AI ethically. The UK has positioned itself as a hopeful leader in ethical AI development, but the concept of ethical and responsible AI is still relatively nascent. Lord Tim Clement-Jones stresses that this is an area that will need some sort of global agreement in the long run.

International adoption of AI standards and ethics

Lord Tim Clement-Jones thinks it is incredibly important to have a set of criteria that researchers, developers, and those building AI agree to follow, so that AI continues to be designed in an ethical way. Implementing specific ideas of this kind can help developers to create AI that is more helpful than harmful. He focuses on the idea that AI should be beneficial, transparent, unbiased, and not destructive. He believes that if we hold true to these ideals in design, we can create AI that is useful for society but does not put anyone at a disadvantage.

One thing that he is particularly worried about is the notion that if people become fearful of the technology, they will ultimately stifle innovation. It is his hope that by placing an emphasis on creating ethically designed AI systems, people will feel more comfortable with AI being used. In fact, some organizations, such as the OECD, have created sets of AI principles, adopted by member countries including the UK, to help create international guidelines for all to follow.

Some people are concerned that AI will take their jobs. What we’ve seen is that AI is not a job killer, but a job category killer. Lord Tim Clement-Jones believes that if we can focus on how AI can help citizens and society, there should be no real reason to fear this technology. A big point of concern with AI is the notion that artificial intelligence will replace the need for humans. Lord Tim Clement-Jones believes that this will only be a concern if companies put a focus on productivity over actual business transformation. There are plenty of jobs and tasks that AI can take over, particularly ones focused on busywork.

However, that does not mean that there will necessarily be fewer jobs overall. He believes that the industry will create new and different jobs and that the world will rise to meet the occasion. In fact, we’ve seen this happen with other transformative technologies as well. If anything, his big area of concern is the potential impact on on-the-job training and learning. While it is true that technology can make some jobs and tasks more efficient, it can also cut into the time employees used to spend connecting with and learning from their more experienced peers. For example, technology and AI are helping law firms by taking on certain tasks; however, when junior lawyers no longer perform these tasks, they lose opportunities to learn. If we can meet training and on-the-job learning needs through other means, this should not be a huge problem.

Another area of potential concern is the possibility of negative outcomes due to dependencies on AI technologies. He points out that airplane pilots are now less than pleased that the cockpit is a mostly automated experience, meaning they don’t necessarily spend much time using their skills in flight. This has the potential to create a knowledge gap, or simply to allow skilled individuals to get rusty. When you consider that these employees only need their skills in the event that something goes wrong, it is easy to see how a frightening scenario might play out. If a skilled person does not regularly use and exercise those skills until the worst possible moment, when they suddenly become necessary, the outcomes can be disastrous.

As a whole, though, Lord Tim Clement-Jones believes that the future of AI is bright. He stresses the idea that AI can do a great deal of good for us and help us improve the quality of our world. There are endless potential benefits to this technology. However, because of the potential for abuse and the raw power of these systems, we must take steps to ensure that its development and use remain an ethical process. For now, it seems obvious that AI is a transformative technology that will widely impact a range of industries, governments, and society as a whole. As we move forward, it will take many conversations between countries and businesses alike to ensure that the future it brings is a bright one.

https://www.forbes.com/sites/cognitiveworld/2020/04/12/the-united-kingdoms-role-in-the-future-of-ai/?sh=7a154382768d


The rise of AI marks an opportunity for radical changes in corporate governance

Lord C-J NSTech Jan 2020

There is currently a great deal of concern in Britain and the EU more widely about the implications of the adoption of artificial intelligence (AI), particularly in algorithmic decision making and prediction in the public sector, notably in policing and the criminal justice system, and in the use of live facial recognition technology in public places.

As a result there has been pressure to set out much clearer guidelines, beyond general ethical codes, for the use of these technologies by government and its agencies.

But even if we get things right in the public sector, businesses have responsibilities too, both those who develop AI and those who adopt it. AI, even in its narrow form, will and should have a profound impact on, and implications for, corporate governance generally.

Trade organisations such as techUK and specific AI organisations such as the Partnership on AI (comprising major tech companies and NGOs) recognise that corporate responsibility and governance on AI is increasingly important.

There is a growing corpus of corporate governance work relating to AI and the ethics of its application in business. Asset managers such as Hermes and Fidelity are now adopting guidance for the companies they invest in.

The Institute of Business Ethics’ report “Corporate Ethics in a Digital Age” is a masterly briefing for boards, written by Peter Montagnon, formerly chair of the IBA Investment Committee, who sadly died the week after its launch.

But he has left a very important piece of work behind him, together with the vital message that boards should be in control and accountable when it comes to applying AI in their business, and that they should have the skillsets to enable them to do so.

The Tech Faculty of the ICAEW has produced a valuable paper on New Technologies, Ethics and Accountability. The bottom line is that we need to operationalize the ethics and engrain ethical behavior. They have set out a number of questions which boards should be asking themselves.

It is imperative that boards have the right skill sets to fulfil their oversight role. For instance, do they understand what technology is being used in their company and how it is being used and managed, for example by HR in recruitment and assessment? Do they have strong lines of accountability for the introduction and impact of AI?

Boards need to be aware of the questions they should ask and the advice they need, and from whom. They need to consider what tools they have available, such as:

  • Algorithm impact assessments / algorithm assurance
  • Risk assessment / ethical audit mechanisms / kitemarking
  • Ethics by design, metrics, and standards for “training, testing and fixing”

Risk management is central to the introduction of new technology. Does a company mainstream oversight into its Audit and Risk Committee, or set up an Ethics Advisory Board? It has even been suggested by Christian Voegtlin, associate professor in corporate social responsibility at Audencia Business School, that there should be a chief philosophy officer to ensure adherence to ethical standards.

Is an AI-adopting business taking full advantage of the rapidly growing concept of regulatory sandboxing? This means a regulator, such as our Financial Conduct Authority, permitting the testing of a new technology without the threat of regulatory enforcement, but with strict oversight and individual formal and informal guidance from the regulator.

Some make an analogy with the application of professional medical ethics. We take these for granted, but should individual AI engineers be explicitly required to declare their adherence to a set of ethical standards, along the lines of a new tech Hippocratic Oath? This could apply to AI adopters as well as developers.

More broadly and more significantly, however, AI can and should contribute positively to a purposeful form of capitalism which is not simply the pursuit of profit but where companies deploy AI in an ethical way, to achieve greater sustainability and a fairer distribution of power and wealth.

We have seen the high-level sets of AI ethics principles developed by bodies like the EU, the OECD, the G20 and the Partnership on AI. These are very comprehensive and provide the basis for a common set of international standards.

In the words of the title of Brent Mittelstadt’s recent Nature Machine Intelligence paper, however, “Principles alone cannot guarantee ethical AI”. We need to develop alongside them a much more socially responsible form of corporate governance.

Dr Maha Hosain Aziz, in her recent book “Future World Order”, talks of the need for a new social contract between tech companies and citizens. I think we need to go further, however.

It is not just the tech companies where the issues identified by Rana Foroohar in “Don’t Be Evil: The Case Against Big Tech” are relevant. They also extend to: “Digital property rights, privacy laws, antitrust rules, free speech, the legality of surveillance, the implications of data for economic competitiveness and national security, the impact of the algorithmic disruption of work on labor markets, the ethics of artificial intelligence and the health and well being of users of digital technology.”

As Foroohar says, “[when] we think about how to harness the power of technology for the public good, rather than the enrichment of a few companies, we must make sure that the leaders of those companies aren’t the only ones to have a say in what the rules are.”

The Big Innovation Centre has played a leading role in the debate with its “Purposeful Company Project”, which was launched back in 2015 with an ethos that “the role of business is to fulfil human wants and needs and to pursue a purpose that has a clear benefit to society. It is through the fulfilment of their chosen purpose that value is created.”

Since then, it has produced several important reports on the need for an integrated regulatory approach to stewardship and intrinsic purpose definition, and on the changes that should be made to the Financial Reporting Council’s UK Stewardship Code.

With all the potential opportunities and disruption involved with AI, this work is now absolutely crucial to ensure that businesses don’t adopt new technologies without a strong underlying set of corporate values, so that it is not just shareholders who benefit, and so that the impact and distribution of benefits to employees and society at large are fully considered.

We can’t, of course, confine these ethical challenges to the UK. We need ethical alignment in a global world. I hope we will both adopt the international principles which have been developed and, by the same token, argue for the international adoption of the purposeful company principles we are developing in the UK.

Tim, Lord Clement-Jones is the former Chair of the House of Lords Select Committee on AI and Co-Chair of the All Party Parliamentary Group on AI.

https://tech.newstatesman.com/business/ai-corporate-governance