Digital ID: What’s the current state-of-play in the UK?
On 22 July, as part of the #DigitalID2021 event series, techUK hosted an insightful discussion exploring the current state-of-play for digital identity in the UK and how to build public trust in digital identity technologies. The panel also examined how the UK’s progress on digital ID compares with international counterparts and set out their top priorities to support the digital identity market and facilitate wider adoption.
The panel included:
- Lord Tim Clement-Jones, House of Lords Spokesperson for Digital for the Liberal Democrats
- Margaret Moore, Director of Citizen & Devolved Government Services, Sopra Steria (Chair)
- Julie Dawson, Director of Regulatory and Policy, Yoti
- Laura Barrowcliff, Head of Strategy, GBG Plc
You can watch the full webinar here or read our summary of the key insights below:
The UK’s progress on digital identity
Opening the session, the panel discussed progress around digital identity since the start of the pandemic.
Julie Dawson raised a number of developments that indicate steps in the right direction. Before the pandemic, over 3.5m EU citizens proved their settled status via the EU Settlement Scheme, whilst the JMLSG and the Land Registry have both since explicitly recognised digital identity. Digital right to work checks and a Home Office sandbox on age verification technologies in alcohol sales have also been introduced since March last year. She also lauded the creation of the Digital Regulation Cooperation Forum as a great example of joining up across government departments, such as on the topic of age assurance.
Lord Tim Clement-Jones, on the other hand, noted that the pace of change has remained slow. He said that the UK government needs to take concrete action and should focus on opening up government data to third party providers. He also made the point that the u-turn on the Digital Economy Act Part 3 has not yet been rectified, and so the manifesto pledge to protect children online has still to be fulfilled. Julie pointed out that legislative change to the Mandatory Licensing Conditions is still needed to enable a person to prove their age when purchasing alcohol without solely relying on a physical document with a physical hologram.
Collaboration across industry around digital identities was also highlighted by Julie, drawing upon the example of the Good Health Pass Collaborative, which has emerged since the start of the pandemic. The Collaborative has brought together a variety of stakeholders and over 130 companies to work on an interoperable digital identity solution that will enable international travel to operate at scale once more post-COVID.
Examining the Government alpha Trust Framework and latest consultation
Moving on to look at the government’s alpha Trust Framework for digital identity, as well as the newly published consultation on digital identity and attributes, the panel explored what these documents do well and what gaps ultimately remain.
Julie Dawson and Laura Barrowcliff both saw a lot of good in the new proposals, with Laura highlighting how the priorities in the government’s approach around governance, inclusion and interoperability broadly hit on the right points. Julie also highlighted the role for vouching in the government’s framework as a positive step and emphasised the government’s recognition of the importance of parity for digital identity verification as one of the most central developments for wider adoption of the technology.
Providing a more cautious view, Lord Tim Clement-Jones said the UK risked creating a byzantine pyramid of governance on digital identity. He pointed to the huge number of bodies envisaged to have roles in the UK system and raised concerns that the UK will end up with a certification scheme that differs from anyone else’s internationally by not using existing standards or accreditation systems.
Looking forward, Julie highlighted that providers are looking for clarity on how to operate and deliver over the next 18 months before any of these documents become legislation. She also expressed the sincere hope that the progress made in terms of offering digital Right to Work checks, alongside physical ones, will continue rather than end in September 2021.
She identified two separate ‘tracks’ for public and private sector use of digital identity and raised the need for a conversation on when and how to join these up with the consumer at the heart. When considering data sources, for example, the ability of digital identity providers to access data across the Passport Office, the DVLA and other government agencies and departments is critical to support the development of digital identity solutions.
The panel was pleased to see the creation of a new government Digital Identity Strategy Board, which they hoped would drive progress, but raised the need for further transparency about ongoing work in this space, including a list of members, terms of reference and meeting minutes from these sessions.
Public trust in digital identity
One of the core topics of conversation centred upon trust in digital identity technologies and what steps can be taken to drive wider public trust in this space.
Lord Tim Clement-Jones said that there is a key role for government on standards to ensure digital identity providers are suitable and trustworthy, as well as in providing a workable and feasible proposal that inspires public confidence.
Julie highlighted how, alongside the Post Office, Yoti welcomed the soon-to-be-published research undertaken by OIX into documents and inclusion.
Laura Barrowcliff emphasised the importance of context for public trust, putting the consumer experience at the heart of considerations. Opening up digital identity and consumer choice is one such way of improving the experience for users. Whilst much of the discussion on trust ties in with concerns around fraud, Laura highlighted how digital identity can actually help from a security and privacy perspective by embodying principles such as data minimisation and transparency. She also highlighted how data minimisation and proportionate use of digital identity data could be key for user buy-in.
Lessons from around the world
Looking to international counterparts, the panel drew attention to countries around the world which have made good progress on digital identity and key learnings from these global exemplars.
The progress on digital identity made in Singapore and Canada was mentioned by Julie Dawson, who emphasised the openness around digital identity proposals – which span the public and private sector – and the work being done to keep citizens informed and involve them in the process.
Julie also raised the example of the EU, which is accelerating its work on digital identity with an approach that also spans the public and private sector and is looking at key issues such as data sources whilst focusing on the consumer. Lord Tim Clement-Jones emphasised the importance of monitoring Europe’s progress in this area and the need for the UK government to consider how its own approach will be interoperable internationally.
Panellists discussed the role digital identities have played in Estonia where 99% of citizens hold digital ID and public trust in digital identities is the norm. However, they recognised key differences between the UK and Estonia. In the UK, digital identity solutions are developing in the context of widespread use of physical identification documents, whereas digital identities were the starting point in Estonia.
Beyond the EU, Laura said that GBG has a digital identity solution in Australia where the market for reusable identities is accelerating rapidly. She highlighted that working with private sector companies who have the necessary infrastructure and capabilities in place is critical to drive adoption.
Priorities for digital identity
Drawing the discussion to a close, each of the panellists was asked for their top priority to support public trust and the growth of the digital identity market in the UK.
Transparency was identified as Julie Dawson’s top priority, particularly around what discussions are happening within and across government departments and on the work of the Strategy Board.
Lord Tim Clement-Jones highlighted data and trustworthy data-sharing as key. He said he hopes to see the formation of data foundations and trusts of publicly held information that is properly curated to be used or shared on the basis of set standards and rules, which should spill over into the digital identity arena.
Laura Barrowcliff said simplicity is most important, keeping things simple for those working in the ecosystem as well as for consumers, with those consumers at the heart of all decision-making processes.
Britain should be leading the global conversation on tech
It's been clear during the pandemic that we're increasingly dependent on digital technology and online solutions. The Culture Secretary recently set out 10 tech priorities. Some of these were reflected in the Queen's Speech, but how do they measure up and are they the right ones?
First, we need to roll out world-class digital infrastructure nationwide and level up digital prosperity across the UK.
We were originally promised spending of £5 billion by 2025, yet only a fraction of this – £1.2 billion – will have been spent by then. Digital exclusion and data poverty have become acute during the pandemic. It's estimated that some 1.8 million children have not had adequate digital access. It's not just about broadband being available; it's also about affordability and the availability of devices.
Unlocking the power of data is another priority, as well as championing free and fair digital trade.
We recently had the government’s response to the consultation on the National Data Strategy. There is some understanding of the need to maintain public trust in the sharing and use of their data and a welcome commitment to continue with the work started by the Open Data Institute in creating trustworthy mechanisms, such as data institutions and trusts, to do so. But recent events involving GP-held data demonstrate that we must also ensure public data is valued and used for public benefit and not simply traded away. We should establish a Sovereign Health Data Fund as suggested by Future Care Capital.
"The pace, scale and ambition of government action does not match the upskilling challenge facing many people working in the UK"
We must keep the UK safe and secure online. We need the “secure by design” consumer protection provisions now promised. But the draft Online Safety Bill now published is not yet fit for purpose. The problem is what's excluded: in particular, commercial pornography where there is no user-generated content; societal harms caused, for instance, by fake news and disinformation, so clearly described in the report of Lord Puttnam’s Democracy and Digital Technologies Select Committee; and all educational and news platforms.
Additionally, no group actions can be brought. There's no focus on the issues surrounding anonymity and “know your user”, nor any reference to economic harms. Most tellingly, there is no focus on enhanced PSHE or the promised media literacy strategy, both of which must go hand-in-hand with this legislation. There's also little clarity on the issue of algorithmic pushing of content.
It’s vital that we build a tech-savvy nation. This is partly about digital skills for the future and I welcome greater focus on further education in the new Skills and Post-16 Education Bill. But the pace, scale and ambition of government action does not match the upskilling challenge facing many people working in the UK, as Jo Johnson recently said.
The need for a funding system that helps people to reskill is critical. Non-STEM creative courses should be valued. Careers advice and adult education need a total revamp. Apprenticeship levy reform is overdue. The work of Local Digital Skills Partnerships is welcome, but they are massively under-resourced. Broader digital literacy is crucial too, as the AI Council pointed out in their AI Roadmap. As is greater diversity and inclusion in the tech workforce.
We must fuel a new era of start-ups and scaleups and unleash the transformational power of tech and AI.
The government needs to honour their pledge to the Lords' Science and Technology Committee to support catapults to be more effective institutions as a critical part of innovation strategy. I welcome the commitment to produce a National AI Strategy, which we should all contribute to when the consultation takes place later this year.
We should be leading the global conversation on tech, building on the recent G7 Digital Communiqué and plans to host the Future Tech Forum, but we need to go beyond principles in establishing international AI governance standards and solutions. G7 agreement on a global minimum corporation tax rate bodes well for OECD digital tax discussions.
At the end of the day there are numerous notable omissions. Where is the commitment to a Bill to set up the new Digital Markets Unit, or tackling the gig economy in the many services run through digital applications? The latter should be a major priority.
Lord C-J: Government must resolve AI ethical issues in the Integrated Review
The opportunities and risks involved with the development of AI and other digital technologies and use of data loom large in the 4 key areas of the Strategic Framework of the Integrated Review.
The House Live April 2021
The Lords recently debated the government’s Integrated Review set out in “Global Britain in a Competitive Age”. The opportunities and risks involved with the development of AI and other digital technologies and use of data loom large in the 4 key areas of the Strategic Framework of the Review. So, I hope that the promised AI Strategy this autumn and a Defence AI Strategy this May will flesh these out, resolve some of the contradictions and tackle a number of key issues. Let me mark the government’s card in the meantime.
Commercialisation of our R&D in the UK is key but can be a real weakness. The government need to honour their pledge to the Science and Technology Committee to support Catapults to be more effective institutions as a critical part of innovation strategy. Access to finance is also crucial. The Kalifa Review of UK Fintech recommends the delivery of a digital finance package that creates a new regulatory framework for emerging technology. What is the government’s response to these creative ideas?
"The pandemic has highlighted the need for public trust in data use"
Regarding skills, the nature of work will change radically and there will be a need for different jobs and skills. There is a great deal happening on high-end technical specialist skills: Turing Fellowships, PhDs, conversion courses, an Office for Talent, a Global Talent Visa and so on. As the AI Council Roadmap points out, the government needs to take steps to ensure that the general digital skills and digital literacy of the UK are brought up to speed. A specific training scheme should be designed to support people to work alongside AI and automation, and to be able to maximise its potential.
Building national resilience by adopting a whole-of-society approach to risk assessment is welcome, but in this context the government should heed the recent Alan Turing Institute report which emphasises that access to reliable information, particularly online, is crucial to the ability of a democracy to coordinate effective collective action. New AI applications such as GPT-3, the language generation system, can readily spread and amplify disinformation. How will the Online Safety legislation tackle this?
At the heart of building resilience must lie a comprehensive cyber strategy but the threat in the digital world is far wider than cyber. Hazards and threats can become more likely because of the development of technologies like AI and the transformations it will bring and how technologies interconnect to amplify them.
A core of our resilience is of course defence capability. A new Defence Centre for Artificial Intelligence is now being formed to accelerate adoption, and a Defence AI strategy is promised next month. Its importance is reinforced in the Defence Command Paper, but there is a wholly inadequate approach to the control of lethal autonomous weapon systems, or LAWS. Whilst there is a NATO definition of “automated” and “autonomous”, the MOD has no operative definition of LAWS. That the most problematic aspect – autonomy – has already been defined makes this an extraordinary state of affairs, given that the UK is a founding member of the AI Partnership for Defence, created to “provide values-based global leadership in defence for policies and approaches in adopting AI.”
The Review talks of supporting the effective and ethical adoption of AI and data technologies and identifying international opportunities to collaborate on AI R&D ethics and regulation. At the same time, it talks of the limits of global governance with “competition over the development of rules, norms and standards.” How do the two statements square? We have seen the recent publication of the EU’s proposed Legal Framework for the risk-based regulation of AI. Will the government follow suit?
Regarding data, the government says it wants to see a continuing focus on interoperability and to champion the international flow of data, and is setting up a new Central Digital and Data Office. But the pandemic has highlighted the need for public trust in data use. Will the National Data Strategy (NDS) recognise this and take on board the AI Council’s recommendations to build public trust for use of public data, through competition in data access, and responsible and trustworthy data governance frameworks?
Lord Clement-Jones is a Liberal Democrat member of the House of Lords, former Chair of the Lords Select Committee on AI and co-chair of the APPG on AI.
We Need a Legal and Ethical Framework for Lethal Autonomous Weapons
As part of a recent Defence Review, our Prime Minister has said that the UK will invest another £1.5 billion in military research and development designed to master the new technologies of warfare and establish a new Defence Centre for AI. The head of the British Army recently said that he foresees the army of the future as an integration of “boots and bots”.
The Government however have not yet explained how legal and ethical frameworks and support for personnel engaged in operations will also change as a consequence of the use of new technologies, particularly autonomous weapons, which could be deployed by our armed forces or our allies.
The final report of the US National Security Commission on Artificial Intelligence, published this March, however, considered the use of autonomous weapons systems and the risks associated with AI-enabled warfare, and concluded that “The U.S. commitment to IHL” – international humanitarian law – “is long-standing, and AI-enabled and autonomous weapon systems will not change this commitment.”
The UN Secretary General, António Guterres goes further and argues: “Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law”. Yet we still have no international limitation agreement.
In company with a former Secretary of State for Defence and a former Chief of Defence Staff I recently argued in Parliament for a review of how legal and ethical frameworks need to be updated in response to novel defence technologies. This is my speech in which I pointed out the slow progress being made by the UK Government in addressing these issues.
In a written response subsequent to the debate, the Minister stated that whilst there is a NATO definition of “automated system” and “autonomous system”, the UK Ministry of Defence has no operative definition of Lethal Autonomous Weapon Systems or “LAWS”. Given that the most problematic aspect – autonomy – has been defined, that is an extraordinary state of affairs.
A few years ago, I chaired the House of Lords Select Committee on AI, which considered the economic, ethical and social implications of advances in artificial intelligence. In our report, published in April 2018 and entitled ‘AI in the UK: Ready, willing and able’, we addressed the issue of military use of AI, stating that ‘perhaps the most emotive and high stakes area of AI development today is its use for military purposes’ and recommending that this area merited a ‘full inquiry on its own’ (para 334).
As the Noble Lord Browne of Ladyton has made plain, regrettably, it seems not to have yet attracted such an inquiry or even any serious examination. I am therefore extremely grateful to the Noble Lord for creating the opportunity to follow up on some of the issues we raised in connection with the deployment of AI and some of the challenges we outlined.
It’s also a privilege to be a co-signatory with the Noble and Gallant Lord Houghton, who has thought so carefully about issues involving the human interface with military technology.
The broad context of course, as the Noble Lord Browne has said, are the unknowns and uncertainties in policy, legal and regulatory terms that new technology in military use can generate.
His concerns about complications and the personal liabilities to which it exposes deployed forces are widely shared by those who understand the capabilities of new technology. All the more so in a multinational context where other countries may be using technology which either we would not deploy or the use of which could create potential vulnerabilities for our troops.
Looking back to our Report, one of the things that concerned the Committee more than anything else was the grey area surrounding the definition of lethal autonomous weapon systems or LAWS.
As the Noble Lord Browne has said, as the Committee explored the issue, we discovered that the UK’s then definition – which included the phrase “An autonomous system is capable of understanding higher-level intent and direction” – was clearly out of step with the definitions used by most other governments and imposed a much higher threshold on what might be considered autonomous.
This allowed the government to say: “the UK does not possess fully autonomous weapon systems and has no intention of developing them. Such systems are not yet in existence and are not likely to be for many years, if at all.”
Our committee concluded that “In practice, this lack of semantic clarity could lead the UK towards an ill-considered drift into increasingly autonomous weaponry”.
This was particularly in the light of the fact that at the UN’s Convention on Certain Conventional Weapons Group of Governmental Experts (GGE) in 2017 the UK had opposed the proposed international ban on the development and use of autonomous weapons.
We therefore recommended that the UK’s definition of autonomous weapons should be realigned to be the same, or similar, as that used by the rest of the world.
The Government in their response to the Committee’s Report in June 2018 however replied: The Ministry of Defence “has no plans to change the definition of an autonomous system”.
It did say however: “The UK will continue to actively participate in future GGE meetings, trying to reach agreement at the earliest possible stage.”
Later, thanks to the Liaison Committee we were able - on two occasions last year - to follow up on progress in this area.
On the first occasion, the Liaison Committee’s letter of last January asked: “What discussions have the Government had with international partners about the definition of an autonomous weapons system, and what representations have they received about the issues presented with their current definition?”

The government replied:

“There is no international agreement on the definition or characteristics of autonomous weapons systems. HMG has received some representations on this subject from Parliamentarians … and has discussed it during meetings of the UN Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), an international forum which brings together expertise from states, industry, academia and civil society.

“The GGE is yet to achieve consensus on an internationally accepted definition and there is therefore no common standard against which to align. As such, the UK does not intend to change its definition.”
So no change there, my Lords, until later in the year: in December 2020 the Prime Minister announced the creation of the Autonomy Development Centre to “accelerate the research, development, testing, integration and deployment of world-leading artificial intelligence and autonomous systems”.
In the follow up Report “AI in the UK: No Room for Complacency” published in the same month, we concluded: “We believe that the work of the Autonomy Development Centre will be inhibited by the failure to align the UK’s definition of autonomous weapons with international partners: doing so must be a first priority for the Centre once established.”
The response to this last month was a complete about-turn by the Government. They said:
“We agree that the UK must be able to participate in international debates on autonomous weapons, taking an active role as moral and ethical leader on the global stage, and we further agree the importance of ensuring that official definitions do not undermine our arguments or diverge from our allies.
“In recent years the MOD has subscribed to a number of definitions of autonomous systems, principally to distinguish them from unmanned or automated systems, and not specifically as the foundation for an ethical framework. On this aspect, we are aligned with our key allies.
“Most recently, the UK accepted NATO’s latest definitions of “autonomous” and “autonomy”, which are now in working use within the Alliance. The Committee should note that these definitions refer to broad categories of autonomous systems, and not specifically to LAWS. To assist the Committee, we have provided a table setting out UK and some international definitions of key terms.”
The NATO definition sets a much less high bar as to what is considered autonomous: “A system that decides and acts to accomplish desired goals within defined parameters, based on acquired knowledge and an evolving situational awareness, following an optimal but potentially unpredictable course of action.”
The Government went on to say: “The MOD is preparing to publish a new Defence AI Strategy and will continue to review definitions as part of ongoing policy development in this area.”
Now, I apologise for taking Noble Lords at length through this exchange of recommendation and response, but if nothing else it does demonstrate the terrier-like quality of Lords Select Committees in getting responses from government.
This latest response is extremely welcome. But in the context of Lord Browne’s amendment and the issues we have raised, we need to ask a number of questions now: what are the consequences of the MOD’s fresh thinking?
What is the Defence AI Strategy designed to achieve? Does it include the kind of enquiry our Select Committee was asking for?
Now that we subscribe to the common NATO definition of LAWS will the Strategy in fact deal specifically with the liability and international and domestic legal and ethical framework issues which are central to this amendment?
If not my Lords, then a review of the type envisaged by this amendment is essential.
The final report of the US National Security Commission on Artificial Intelligence, referred to by the Noble Lord Browne, has, for example, taken a comprehensive approach to the issues involved. The Noble Lord has quoted three very important conclusions and asked whether the government agrees in respect of our own autonomous weapons.
There are three further crucial recommendations made by the Commission:
“The United States must work closely with its allies to develop standards of practice regarding how states should responsibly develop, test, and employ AI-enabled and autonomous weapon systems.”
And “The United States should actively pursue the development of technologies and strategies that could enable effective and secure verification of future arms control agreements involving uses of AI technologies.”
And of particular importance in this context: “countries must take actions which focus on reducing risks associated with AI-enabled and autonomous weapon systems and encourage safety and compliance with IHL (International Humanitarian Law) when discussing their development, deployment, and use”.
Will the Defence AI Strategy or indeed the Integrated Review undertake as wide an enquiry? Would it come to the same or similar conclusions?
My Lords, the MOD, it seems, has moved some way towards getting to grips with the implications of autonomous weapons in the last three years. If it has not yet considered the issues set out in the amendment, it clearly should, and it should update the legal frameworks for warfare in the light of new technology as soon as possible, or our service personnel will be at considerable legal risk. I hope it will move further in response to today’s short debate.
COVID-19, Artificial Intelligence and Data Governance: A Conversation with Lord Tim Clement-Jones
BIICL June 2020
https://youtu.be/sABSaAkkyrI
This was the first in a series of webinars on 'Artificial Intelligence: Opportunities, Risks, and the Future of Regulation'.
In light of the COVID-19 outbreak, governments are developing tracing applications and using a multitude of data to mitigate the spread of the virus. But the processing, storage and use of personal data, and the public health effectiveness of these applications, require public trust and a clear and specific regulatory context.
The technical focus in the debate on the design of the applications - centralised v. decentralised, national v. global, and so on - obfuscates ethical, social, and legal scrutiny, in particular against the emerging context of public-private partnerships. Discussants focused on these issues, considering the application of AI and data governance issues against the context of a pandemic, national responses, and the need for international, cross border collaboration.
Lord Clement-Jones CBE led a conversation with leading figures in this field, including:
Professor Lilian Edwards, Newcastle Law School, the inspiration behind the draft Coronavirus (Safeguards) Bill 2020: Proposed protections for digital interventions and in relation to immunity certificates;
Carly Kind, Director of The Ada Lovelace Institute, which published the rapid evidence review paper Exit through the App Store? Should the UK Government use technology to transition from the COVID-19 global public health crisis;
Professor Peter Fussey, Research Director of Advancing human rights in the age of AI and the digital society at Essex University's Human Rights Centre;
Mark Findlay, Director of the Centre for Artificial Intelligence and Data Governance at Singapore Management University, which has recently published a position paper on Ethics, AI, Mass Data and Pandemic Challenges: Responsible Data Use and Infrastructure Application for Surveillance and Pre-emptive Tracing Post-crisis.
The event was convened by Dr Irene Pietropaoli, Research Fellow in Business & Human Rights, British Institute of International and Comparative Law.
Regulating artificial intelligence: Where are we now? Where are we heading?
By Annabel Ashby, Imran Syed & Tim Clement-Jones on March 3, 2021
https://www.technologyslegaledge.com/author/tclementjones/
Hard or soft law?
That the regulation of artificial intelligence is a hot topic is hardly surprising. AI is being adopted at speed, news reports about high-profile AI decision-making appear frequently, and the sheer volume of guidance and regulatory proposals for interested parties to digest can seem challenging.
Where are we now? What can we expect in terms of future regulation? And what might compliance with “ethical” AI entail?
High-level ethical AI principles were set out by the OECD, EU and G20 in 2019. As explained below, great strides were made in 2020 as key bodies worked to capture these principles in proposed new regulation and operational processes. 2021 will undoubtedly maintain this momentum as these initiatives continue their journey into further guidance and some hard law.
In the meantime, with regulation playing catch-up with reality (so often the case where technological innovation is concerned), industry has sought to provide reassurance by developing voluntary codes. While this is helpful and laudable, regulators are taking the view that more consistent, risk-based regulation is preferable to voluntary best practice.
We outline the most significant initiatives below, but first it is worth understanding what regulation might look like for an organisation using AI.
Regulating AI
Of course the devil will be in the detail, but analysis of the most influential papers from around the globe reveals common themes that are the likely precursors of regulation. It shows that, conceptually, the regulation of AI is fairly straightforward and has three key components:
- setting out the standards to be attained;
- creating record keeping obligations; and
- possible certification following audit of those records, all framed by a risk-based approach.
Standards
Quality starts with the governance process around an organisation’s decision to use AI in the first place (does it, perhaps, involve an ethics committee? If so, what does the committee consider?) before considering the quality of the AI itself and how it is deployed and operated by an organisation.
Key areas that will drive standards in AI include the quality of the training data used to teach the algorithm (flawed data can “bake in” inequality or discrimination), the degree of human oversight, and the accuracy, security and technical robustness of the IT. There is also usually an expectation that certain information be given to those affected by the decision-making, such as consumers or job applicants. This includes explainability of those decisions and an ability to challenge them – a process made more complex when decisions are made in the so-called “black box” of a neural network.

An argument against specific AI regulation is that some of these quality standards are already enshrined in hard law, most obviously in equality laws and, where relevant, data protection. However, the more recent emphasis on ethical standards means that some aspects of AI that have historically been considered soft nice-to-haves may well develop into harder must-haves for organisations using AI. For example, the Framework for Ethical AI adopted by the European Parliament last autumn includes mandatory social responsibility and environmental sustainability obligations.
Records
To demonstrate that processes and standards have been met, record-keeping will be essential. At least some of these records will be open to third-party audit as well as being used for an organisation's own due diligence. Organisations need a certain maturity in their AI governance and operational processes to achieve this, although for many it will be a question of identifying gaps and/or enhancing existing processes rather than starting from scratch. Audit could include information about or access to training data sets; evidence that certain decisions were made at board level; staff training logs; operational records; and so on. Records will also form the foundation of the all-important accountability aspects of AI.

That said, AI brings particular challenges to record-keeping and audit. These include an argument for going beyond singular audits and static record-keeping into a more continuous mode of monitoring, given that the decisions of many AI solutions will change over time as they seek to improve accuracy. This is of course part of the appeal of moving to AI, but it creates a potentially greater opportunity for bias or errors to be introduced and to scale quickly.
Certification
A satisfactory audit could inform AI certification, helping to drive quality and build up customer and public confidence in AI decision-making necessary for successful use of AI. Again, although the evolving nature of AI which “learns” complicates matters, certification will need to be measured against standards and monitoring capabilities that speak to these aspects of AI risk.
Risk-based approach
Recognising that AI’s uses range from the relatively insignificant to critical and/or socially sensitive decision-making, best practice and regulatory proposals invariably take a flexible approach and focus requirements on “high-risk” use of AI. This concept is key; proportionate, workable regulation must take into account the context in which the AI is to be deployed and its potential impact rather than merely focusing on the technology itself.
Key initiatives and Proposals
Turning to some of the more significant developments in AI regulation, there are some specifics worth focusing on:
OECD
The OECD outlined its classification of AI systems in November with a view to giving policy-makers a simple lens through which to view the deployment of any particular AI system. Its classification uses four dimensions: context (i.e. sector, stakeholder, purpose etc); data and input; AI model (i.e. neural or linear? Supervised or unsupervised?); and tasks and output (i.e. what does the AI do?). Read more here.
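The OECD's four-dimension lens can be pictured as a simple record that a policy-maker fills in for each deployed system. The sketch below is purely illustrative: the field names and the example system are assumptions of this article, not the OECD's official schema.

```python
from dataclasses import dataclass

@dataclass
class AISystemClassification:
    """Illustrative record mirroring the OECD's four classification dimensions."""
    context: str          # sector, stakeholders, purpose of deployment
    data_and_input: str   # nature and provenance of the data the system consumes
    ai_model: str         # e.g. neural vs. linear, supervised vs. unsupervised
    task_and_output: str  # what the AI actually does

# Hypothetical example: a CV-screening tool described along the four dimensions.
screening_tool = AISystemClassification(
    context="recruitment; affects job applicants",
    data_and_input="historical hiring records",
    ai_model="neural, supervised",
    task_and_output="ranks candidates for human review",
)

print(screening_tool.context)
```

Filling in the same four fields for every system gives policy-makers a like-for-like basis for comparing deployments, which is the stated aim of the OECD framework.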
Europe
Several significant proposals were published by key institutions in 2020.
In the Spring, the European Commission’s White Paper on AI proposed regulation of AI by a principles-based legal framework targeting high-risk AI systems. It believes that regulation can underpin an AI “Eco-system of Excellence” with resulting public buy-in thanks to an “Eco-system of Trust.” For more detail see our 2020 client alert. Industry response to this proposal was somewhat lukewarm, but the Commission seems keen to progress with regulation nevertheless.
In the autumn the European Parliament adopted its Framework for Ethical AI, to be applicable to “AI, robotics and related technologies developed, deployed and/or used within the EU” (regardless of the location of the software, algorithm or data itself). Like the Commission’s White Paper, this proposal also targets high-risk AI (although what high-risk means in practice is not aligned between the two proposals). As well as the social and environmental aspects we touched upon earlier, notable in this proposed Ethical Framework is the emphasis on human oversight required to achieve certification. Concurrently the European Parliament looked at IP ownership for AI-generated creations and published its proposed Regulation on liability for the Operation of AI systems, which recommends, among other things, an update of the current product liability regime.
Looking through the lens of human rights, the Council of Europe considered the feasibility of a legal framework for AI and how that might best be achieved. Published in December, its report identified gaps to be plugged in the existing legal protection (a conclusion which had also been reached by the European Parliamentary Research Services, which found that existing laws, though helpful, fell short of the standards required for its proposed AI Ethics framework). Work is now ongoing to draft binding and non-binding instruments to take this study forward.
United Kingdom
The AI Council’s AI Roadmap sets out recommendations for the strategic direction of AI to the UK government. That January 2021 report covers a range of areas, from promoting UK talent to trust and governance. For more detail read the executive summary.
Only a month before, in December 2020, the House of Lords had published AI in the UK: No room for complacency, a report with a strong emphasis on the need for public trust in AI and the associated issue of ethical frameworks. Noting that industry is currently self-regulating, the report recommended sector regulation that would extend to practical advice as well as principles and training. This seems to be a sound conclusion given that the Council of Europe’s work included the review of over 100 ethical AI documents which, it found, started from common principles but interpreted these very differently when it came to operational practice.
The government’s response to that report has just been published. It recognises the need for public trust in AI “including embedding ethical principles against a consensus normative framework.” The report promotes a number of initiatives, including the work of the AI Council and Ada Lovelace Institute, which have together been developing a legal framework for data governance upon which they are about to report.
The influential Centre for Data Ethics and Innovation published its AI Barometer and its Review into Bias in Algorithmic Decision-Making. Both reports make interesting reading, with the barometer looking at risk and regulation across a number of sectors. In the context of regulation, it is notable that the CDEI does not recommend a specialist AI regulator for the UK but seems to favour a sectoral approach if and when regulation is required.
Regulators
Regulators are interested in lawful use, of course, but are also concerned with the bigger picture. Might AI decision-making disadvantage certain consumers? Could AI inadvertently create sector vulnerability thanks to overreliance by the major players on any particular algorithm and/or data pool (the competition authorities will be interested in this aspect too)? The UK’s Competition and Markets Authority published research into potential AI harms in January and is calling for evidence as to the most effective way to regulate AI. Visit the CMA website here.
The Financial Conduct Authority will be publishing a report into AI transparency in financial services imminently. Unsurprisingly, the UK’s data protection regulator has published guidance to help organisations audit AI in the context of data protection compliance, and the public sector benefits from detailed guidance from the Turing Institute.
Regulators themselves are now becoming more of a focus. The December House of Lords report also recommended regulator training in AI ethics and risk assessment. As part of its February response, the government states that the Competition and Markets Authority, Information Commissioner’s Office and Ofcom have together formed a Digital Regulation Cooperation Forum (DRCF) to cooperate on issues of mutual importance, and that a wider forum of regulators and other organisations will consider training needs.
2021 and beyond
In Europe we can expect regulation to develop at pace in 2021, despite concerns from Denmark and others that AI may become over-regulated. As we increasingly develop the tools for classification and risk assessment, the question is therefore less about whether to regulate and more about which applications, contexts and sectors are candidates for early regulation.
Tackling the algorithm in the public sector
Constitution Society Blog Lord C-J March 2021
Lord Clement-Jones CBE is the House of Lords Liberal Democrat Spokesperson for Digital and former Chair of the House of Lords Select Committee on Artificial Intelligence (2017-2018).
https://consoc.org.uk/tackling-the-algorithm-in-the-public-sector/
Algorithms in the public sector have certainly been much in the news since I raised the subject in a House of Lords debate last February. The use of algorithms in government – and more specifically, algorithmic decision-making – has come under increasing scrutiny.
The debate has become more intense since the UK government’s disastrous attempt to use an algorithm to determine A-level and GCSE grades in lieu of exams, which had been cancelled due to the pandemic. This is what the FT had to say last August after the Ofqual exam debacle, where students were subjected to what has been described as unfair and unaccountable decision-making over their A-level grades:
‘The soundtrack of school students marching through Britain’s streets shouting “f*** the algorithm” captured the sense of outrage surrounding the botched awarding of A-level exam grades this year. But the students’ anger towards a disembodied computer algorithm is misplaced. This was a human failure….’
It concluded: ‘Given the severe erosion of public trust in the government’s use of technology, it might now be advisable to subject all automated decision-making systems to critical scrutiny by independent experts…. As ever, technology in itself is neither good nor bad. But it is certainly not neutral. The more we deploy automated decision-making systems, the smarter we must become in considering how best to use them and in scrutinising their outcomes.’
Over the past few years, we have seen a substantial increase in the adoption of algorithmic decision-making and prediction, or ADM, across central and local government. An investigation by the Guardian in late 2019 showed that some 140 local authorities out of 408 surveyed, and about a quarter of police authorities, were by then using computer algorithms for prediction, risk assessment and assistance in decision-making in areas such as benefit claims, who gets social housing and other issues – despite concerns about their reliability. According to the Guardian, nearly a year later that figure had increased to half of local councils in England, Wales and Scotland; many of them without any public consultation on their use.
Of particular concern are tools such as the Harm Assessment Risk Tool (HART) system used by Durham Police to predict re-offending, which was shown by Big Brother Watch to have serious flaws in the way the use of profiling data introduces bias, discrimination and dubious predictions.
Central government use is even more opaque but we know that HMRC, the Ministry of Justice, and the DWP are the highest spenders on digital, data and algorithmic services.
A key example of ADM use in central government is the DWP’s much criticised Universal Credit system, which was designed to be digital by default from the beginning. The Child Poverty Action Group study ‘The Computer Says No’ shows that those accessing their online account are not being given adequate explanation as to how their entitlement is calculated.
The Joint Council for the Welfare of Immigrants (JCWI) and campaigning organisation Foxglove joined forces last year to sue the Home Office over an allegedly discriminatory algorithmic system – the so-called ‘streaming tool’ – used to screen migration applications. This appears to be the first successful legal challenge to an algorithmic decision system in the UK, although rather than defend the system in court, the Home Office decided to scrap the algorithm.
The UN Special Rapporteur on Extreme Poverty and Human Rights, Philip Alston, looked at our Universal Credit system two years ago and said in a statement afterwards: ‘Government is increasingly automating itself with the use of data and new technology tools, including AI. Evidence shows that the human rights of the poorest and most vulnerable are especially at risk in such contexts. A major issue with the development of new technologies by the UK government is a lack of transparency.’
Overseas the use of algorithms is even more extensive and, it should be said, controversial – particularly in the US. One such system is the NYPD’s Patternizr, a tool designed to identify potential future patterns of criminal activity. Others include Northpointe’s COMPAS risk assessment programme in Florida and the InterRAI care assessment algorithm in Arkansas.
It’s not that we weren’t warned, most notably in Cathy O’Neil’s Weapons of Math Destruction (2016) and Hannah Fry’s Hello World (2018), of the dangers of replication of historical bias in algorithmic decision making.
It is clear that failure to properly regulate these systems risks embedding bias and inaccuracy. Even when not relying on ADM alone, the impact of automated decision-making systems across an entire population can be immense in terms of potential discrimination, breach of privacy, access to justice and other rights.
Some of the current issues with algorithmic decision-making were identified as far back as our House of Lords Select Committee Report ‘AI in the UK: Ready Willing and Able?’ in 2018. We said at the time: ‘We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take.’
It was clear from the evidence that our own AI Select Committee took that Article 22 of the GDPR, which deals with automated individual decision-making, including profiling, does not provide sufficient protection to those subject to ADM. It contains a ‘right to an explanation’ provision that applies when an individual has been subject to fully automated decision-making. However, few highly significant decisions are fully automated – often, such systems are used as decision support, for example in detecting child abuse. The law should be expanded to also cover systems where AI is only part of the final decision.
The Science and Technology Select Committee Report ‘Algorithms in Decision-Making’ of May 2018, made extensive recommendations in this respect. It urged the adoption of a legally enforceable ‘right to explanation’ that allows citizens to find out how machine-learning programmes reach decisions affecting them – and potentially challenge their results. It also called for algorithms to be added to a ministerial brief, and for departments to publicly declare where and how they use them.
Last year, the Committee on Standards in Public Life published a review that looked at the implications of AI for the seven Nolan principles of public life, and examined if government policy is up to the task of upholding standards as AI is rolled out across our public services.
The committee’s Chair, Lord Evans, said on publishing the report:
‘Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector…. Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.’
The report found that, despite the GDPR, the Data Ethics Framework, the OECD principles, and the Guidelines for Using Artificial Intelligence in the Public Sector, the Nolan principles of openness, accountability and objectivity are not embedded in AI governance and should be. The Committee’s report presented a number of recommendations to mitigate these risks, including:
- greater transparency by public bodies in use of algorithms,
- new guidance to ensure algorithmic decision-making abides by equalities law,
- the creation of a single coherent regulatory framework to govern this area,
- the formation of a body to advise existing regulators on relevant issues,
- and proper routes of redress for citizens who feel decisions are unfair.
In the light of the Committee on Standards in Public Life Report, it is high time that a minister was appointed with responsibility for making sure that the Nolan standards are observed for algorithm use in local authorities and the public sector, as was also recommended by the Commons Science and Technology Committee.
We also need to consider whether – as Big Brother Watch has suggested – we should:
- Amend the Data Protection Act to ensure that any decisions involving automated processing that engage rights protected under the Human Rights Act 1998 are ultimately human decisions with meaningful human input.
- Introduce a requirement for mandatory bias testing of any algorithms, automated processes or AI software used by the police and criminal justice system in decision-making processes.
- Prohibit the use of predictive policing systems that have the potential to reinforce discriminatory and unfair policing patterns.
This chimes with both the Mind the Gap report from the Institute for the Future of Work, which proposed an Accountability for Algorithms Act, and the Ada Lovelace Institute paper, Can Algorithms Ever Make the Grade? Both reports call additionally for a public register of algorithms, such as have been instituted in Amsterdam and Helsinki, and independent external scrutiny to ensure the efficacy and accuracy of algorithmic systems.
Post COVID, private and public institutions will increasingly adopt algorithmic or automated decision making. These will give rise to complaints requiring specialist skills beyond sectoral or data knowledge. The CDEI in its report, Bias in Algorithmic Decision Making, concluded that algorithmic bias means that the overlap between discrimination law, data protection law and sector regulations is becoming increasingly important and existing regulators need to adapt their enforcement to algorithmic decision-making.
This is especially true of the existing and proposed public sector ombudsmen who are – or will be – tasked with dealing with complaints about algorithmic decision-making. They need to be staffed by specialists who can test algorithms’ compliance with ethically aligned design and operating standards and regulation.
There is no doubt that to avoid unethical algorithmic decision making becoming irretrievably embedded in our public services we need to see this approach taken forward, and the other crucial proposals discussed above enshrined in new legislation.
The Constitution Society is committed to the promotion of informed debate and is politically impartial. Any views expressed in this article are the personal views of the author and not those of The Constitution Society.
Categories: AI, Constitutional standards
Digital Technology, Trust, and Social Impact with David Puttnam
What is the role of government policy in protecting society and democracy from threats arising from misinformation? Two leading experts and members of the UK Parliament, House of Lords, help us understand the report Digital Technology and the Resurrection of Trust.
About the House of Lords report on trust, technology, and democracy
Michael Krigsman: We're discussing the impact of technology on society and democracy with two leading members of the House of Lords. Please welcome Lord Tim Clement-Jones and Lord David Puttnam. David, please tell us about your work in the House of Lords and, very briefly, about the report that you've just released.
Lord David Puttnam: Well, the most recent 18 months of my life were spent doing a report on the impact of digital technology on democracy. In a sense, the clue is in the title because my original intention was to call it The Restoration of Trust because a lot of it was about misinformation and disinformation.
The evidence we took, for just under a year, from all over the world made it evident the situation was much, much worse, I think, than any other committee, any of the 12 of us, had understood. I ended up calling it The Resurrection of Trust and I think that, in a sense, the switch in those words tells you how profound we decided that the issue was.
Then, of course, along comes January the 6th in Washington, and a lot of the things that we had alluded to and things that we regarded as kind of inevitable all, in a sense, came about. We're feeling a little bit smug at the moment, but we kind of called it right at the end of June last year.
Michael Krigsman: Our second guest today is Lord Tim Clement-Jones. This is his third time back on the CXOTalk. Tim, welcome back. It's great to see you again.
Lord Tim Clement-Jones: It's great to be back, Michael. As you know, my interest is very heavily in the area of artificial intelligence, but I have this crossover with David. David was not only on my original committee, but artificial intelligence is right at the heart of these digital platforms.
I speak on digital issues in the House of Lords. They are absolutely crucial. The whole area of online harms (to some quite high degree) is driven by the algorithms at the heart of these digital platforms. I'm sure we're going to unpack that later on today.
David and I do work very closely together in trying to make sure we get the right regulatory solutions within the UK context.
Michael Krigsman: Very briefly, Tim, just tell us (for our U.S. audience) about the House of Lords.
Lord Tim Clement-Jones: It is a revising chamber, but it's also a chamber which has the kind of expertise because it contains people who are maybe at the end of their political careers, if you like, with a small p, but have a big expertise, a great interest in a number of areas that they've worked on for years or all their lives, sometimes. We can draw on real experience and understanding of some of these issues.
We call ourselves a revising chamber but, actually, I think we should really call ourselves an expert chamber because we examine legislation, we look at future regulation much more closely than the House of Commons. I think, in many ways, actually, government does treat us as a resource. They certainly treat our reports with considerable respect.
Key issues covered by the House of Lords report
Michael Krigsman: David, tell us about the core issues that your report covered. Tim, please jump in.
Lord David Puttnam: I think Tim, in a sense, set it up quite nicely. We were looking at the potential danger to democracy—of misinformation, disinformation—and the degree to which the duty of care was being exercised by the major platforms (Facebook, Twitter, et cetera) in understanding what their role was in a new 21st Century democracy, both looking at the positive role they could play in terms of information, generating information and checking information, but also the negative in terms of the amplification of disinformation. That's an issue we looked at very carefully.
This is where Tim and my interests absolutely coincide because within those black boxes, within those algorithmic structures is where the problem lies. The problem—maybe this will spark people a little, I think—is that these are flawed business models. The business model that drives Facebook, Google, and others is an advertising-related business model. That requires volume. That requires hits, and their incomes are generated on the back of hits.
One of the things we tried to unpick, Michael, which was, I think, pretty important, was we took the vision that it's about reach, not about freedom of speech. We felt that a lot of the freedom of speech advocates misunderstood the problem here. Really, the problem was the amplification of misinformation which in turn benefited or was an enormous boost to the revenues of those platforms. That's the problem.
We are convinced through evidence. We're convinced that they could alter their algorithms, that they can actually dial down and solve many, many of the problems that we perceive. But, actually, it's not in their business interest to. They're trapped, in a sense, between the demands or requirements of their shareholders to optimize share value, and the role and responsibility they have as massive information platforms within a democracy.
Lord Tim Clement-Jones: Of course, governments have been extremely reluctant, in a sense, to come up against big tech in that sense. We've seen that in the competition area over the advertising monopoly that the big platforms have. But I think many of us are now much more sensitive to this whole aspect of data, behavioral data in particular.
I think Shoshana Zuboff did us all a huge benefit by really getting into detail on what she calls exhaust data, in a sense. It may seem trivial to many of us but, actually, the use to which it's put in terms of targeting messages, targeting advertising, and, in a sense, helping drive those algorithms, I think, is absolutely crucial. We're only just beginning to come to grips with that.
Of course, David and I are both, if you like, tech enthusiasts, but you absolutely have to make sure that we have a handle on this and that we're not giving way to unintended consequences.
Impact of social media platforms on society
Michael Krigsman: What is the deep importance of this set of issues that you spend so much time and energy preparing that report?
Lord David Puttnam: If you value, as certainly I do—and I'm sure we all do value—the sort of democracy we were born and brought up in, for me it's rather like carrying a porcelain bowl across a very slippery floor. We should be looking out for it.
I did a TED Talk in 2012 ... [indiscernible, 00:07:19] entitled The Duty of Care where I made the point that we use the concept of duty of care with many, many things: in the medical sense, in the educational sense. Actually, we haven't applied it to democracy.
Democracy, of all the things that we value, may end up looking like the most fragile. Our tolerance, if you like, of the growth of these major platforms, our encouragement of the reach because of the benefits of information, has kind of blindsided us to what was also happening at the same time.
Someone described the platforms as outrage factories. I'm not sure if anyone has come up with a better description. We've actually actively encouraged outrage instead of intelligent debate.
The whole essence of democracy is compromise. What these platforms do not do is encourage intelligent debate and reflect the atmosphere of compromise that any democracy requires in order to be successful.
Lord Tim Clement-Jones: The problem is that the culture has been, to date, against us really having a handle on that. I think it's only now, and I think that it's very interesting to see what the Biden Administration is doing, too, particularly in the competition area.
One of the real barriers, I think, is thinking of these things only in terms of individual harm. I think we're now getting to the point where, if somebody is affected by hate speech or racial slurs or whatever as an individual, then governments are beginning to accept that that kind of individual harm is something that we need to regulate and make sure that the platforms deal with.
I think the area that David is raising, which is so important and where there is still resistance in governments, is, if you like, the societal harms that are being caused by the platforms. Now, this is difficult to define, but the consequences could be severe if we don't get it right.
I think, across the world, you only have to look at Myanmar, for instance, [indiscernible, 00:09:33]. If that wasn't societal harm in terms of use by the military of Facebook, then I don't know what is. But there are others.
David has used the analogy of January the 6th, for instance. There are analogies and there are examples across the world where democracy is at risk because of the way that these platforms operate.
We have to get to grips with that. It may be hard, but we have to get to grips with it.
Michael Krigsman: How do you get to grips with a topic that, by its nature, is relatively vague and unfocused? Unlike individual harms, when you talk about societal harm, you're talking about very diffuse and broad impacts.
Lord David Puttnam: Michael, I sit on the Labour benches at the House of Lords and, probably unsurprisingly, I'm a Louis Brandeis fan, so I think the most interesting thing that's taking place at the moment is people who look back to the early part of the 20th Century and the railroads, the breaking up of the railroads, and understanding why that had to happen.
It wasn't just about the railroads. It was about the railroads' ability to block and distort all sorts of other markets. The obvious one was the coal market, but there were others. They also blocked and distorted the development of shipping.
What I think legislators have woken up to is, this isn't just about platforms. This is actually about the way we operate as a society. The influence of these platforms is colossal, but most important of all, the fact that what we have allowed to develop is a business model which acts inexorably against our society's best interest.
That is, it inflames fringe views. It inflames misinformation. Actually, it not only inflames it; it then profits from that inflammation. That can't be right.
Lord Tim Clement-Jones: Of course, it is really quite opaque because, if you look at this, the consumer is getting a free ride, aren't they? Because of the advertising, it's being redirected back to them. But it's their data which is part of the whole business model, as David has described.
It's very difficult sometimes for regulators to say, "Ah, this kind of consumer detriment," or whatever it may be. That's why you also need to look at the societal aspects of this.
If you purely look (in conventional terms) at consumer harm, then you'd actually probably miss the issues altogether because—with things like advertising monopoly, use of data without consent, and so on, and misinformation and disinformation—it is quite difficult (without looking at the bigger societal picture) just to pin it down and say, "Ah, well, there's a consumer detriment. We must intervene on competition grounds." That's why, in a sense, we're all now beginning to rewrite the rules so that we do catch these harms.
Balancing social media platforms rights against the “duty of care”
Michael Krigsman: We have a very interesting point from Simone Jo Moore on LinkedIn who is asking, "How do you strike this balance between intelligent questioning and debate versus trolling on social media? How should lawmakers and policymakers deal with this kind of issue?"
Lord David Puttnam: We identified an interesting area, if you like, of compromise – for want of a better word. As I say, we looked hard at the impact on reach.
Now, on Facebook, if you're a reasonably popular person, you can quite quickly have 5,000 people following what you're saying. At that point, you get a tick.
It's clear to us that the algorithm is able to identify you as a super-spreader at that point. What we're saying is, at that moment not only have you got your tick but you then have to validate and verify what it is you're saying.
That state of outrage, if you like, is what blocks the 5,000 and then has to be explained and justified. That seemed to us an interesting area to begin to explore. Is 5,000 the right number? I don't know.
But what was evident to us is the things that Tim really understands extremely well. These algorithmic systems inside that black box can be adjusted to ensure that, at a certain moment, validation takes place. Of course, we saw it happen in your own election that, in the end, warnings were put up.
Now, you have to ask yourself, why wasn't that done much, much, much sooner? Why? Because we only reasonably recently became aware of the depth of the problem.
In a sense, the whole Russian debacle in the U.S. in the 2016 election kind of got us off on the wrong track. We were looking in the wrong place. It wasn't what Russia had done. It was what Russia was able to take advantage of. That should have been the issue, and it took us a long time to get there.
Lord Tim Clement-Jones: That's why, in a sense, you need new ways of thinking about this. It's the virality of the message, exactly as David has talked about, the super-spreader.
I like the expression used by Avaaz in their report that came out last year looking at, if you like, the anti-vaxx messages and the disinformation over the Internet during the COVID pandemic. They talked about detoxing the algorithm. I think that's really important.
In a sense, I don't think it's possible to lay down absolutely hard and fast rules. That's the benefit of the duty of care: it is a blanket legal concept, with a code of practice, effectively enforced by a regulator. It means that it's up to the platform to get it right in the first place.
Then, of course – David's report talked about it – you need forms of redress. You need a kind of ombudsman, or whatever may be the case, independent of the platforms who can say, "They got it wrong. They allowed these messages to impact on you," and so on and so forth. There are mechanisms that can be adopted, but at the heart of it, as David said, is this black box algorithm that we really need to get to grips with.
Michael Krigsman: You've both used terms that are very interestingly put together, it seems to me. One, Tim, you were just talking about duty of care. David, you've raised (several times) this notion of flawed business models. How do these two, duty of care and the business model, intersect? It seems like they're kind of diametrically opposed.
Lord David Puttnam: It depends on your concept of what society might be, Michael. In the type of society I've spent my life arguing for, they're not opposed at all; they're all of a piece, because that society would have a combination of regulation but also personal responsibility on the part of the people who run businesses.
One of the things that I think Tim and I are going to be arguing for, which we might have problems in the UK, is the notion of personal responsibility. At what point do the people who sit on the board at Facebook have a personal responsibility for the degree to which they exercise duty of care over the malfunction of their algorithmic systems?
Lord Tim Clement-Jones: I don't see a conflict either, Michael. I think that you may see different regulators involved. You may see, for instance, a regulator imposing a way of working over content, user-generated content on a platform. You may see another regulator (more specialist, for instance) on competition. I think it is going to be horses for courses, but I think that's the important thing to make sure that they cooperate.
I just wanted to say that I do think that often people in this context raised the question of freedom of expression. I suspect that people will come on the chat and want to raise that issue. But again, I don't see a conflict in this area because we're not talking about ordinary discourse. We're talking about extreme messages: anti-vaxxing, incitement of violence, and so on and so forth.
The one thing David and I absolutely don't want to do is to impede freedom of expression. But that's sometimes used certainly by the platforms as a way of resisting regulation, and we have to avoid that.
How to handle the cross-border issues with technology governance?
Michael Krigsman: We have another question coming now from Twitter from Arsalan Khan who raises another dimension. He's talking about if individual countries create their own policies on societal harm, how do you handle the cross-border issues? It seems like that's another really tricky one here.
Lord David Puttnam: I think what is happening, and this is quite determined, I think, on the part of the Biden Administration—the UK and, actually, Europe, the EU, is probably further advanced than anybody else on this—is to align our regulatory frameworks. I think that will happen.
Now, in a sense, these are big marketplaces. The Australian situation with Facebook has stimulated this. Once you get these major markets aligned, it's extremely hard to see how Facebook, Google, and the rest of them could continue with their advertising with their current model. They would have to adjust to what those marketplaces require.
Bear in mind, what troubles me a lot, Michael, is that, if you think back, Mr. Putin and President Xi must be laughing their heads off at the mess we got ourselves into because they've got their own solution to this problem – a lovely, simple solution.
We've got our knickers in a twist in an extraordinary situation, quite unintended in most respects. The obligation is on the great Western democracies to align the regulatory frameworks and work together. This can't be done on a country-by-country basis.
Lord Tim Clement-Jones: Once the platforms see the writing on the wall, in a sense, Michael, I think they will want to encourage people to do that. As you know, I've been heavily involved in the AI ethics agenda. That is coming together on an international basis. This, if anything, is more immediate and the pressures are much greater. I think it's bound to come together.
It's interesting that we've already had a lot of interest in the duty of care from other countries. The UK, in a sense, is a bit of a frontrunner in this despite the fact that David and I are both rather impatient. We feel that it hasn't moved fast enough.
Nevertheless, even so, by international standards, we are a little bit ahead of the game. There is a lot of interest. I think, once we go forward and we start defining and putting in regulation, that's going to be quite a useful template for people to be able to legislate.
Lord David Puttnam: Michael, it's worth mentioning that it's interesting how things bubble up and then become accepted. When the notion of fines of up to 10% of turnover was first mooted, people said, "What?! What?!"
Now, that's regarded as kind of a standard around which people begin to gather, so there is momentum. Tim is absolutely right. There is momentum here. The momentum is pretty fierce.
Ten percent of turnover is a big fine. If you're sitting on a board, you've got to think several times before you sign up on that. That's not just the cost of doing business.
Michael Krigsman: Is the core issue then the self-interest of platforms versus the public good?
Lord David Puttnam: Yes, essentially it is. Look back at the big anti-trust decisions that were made in the first decade of the 20th Century. I think we're at a similar moment and, incidentally, I think it is just as certain that these things will be resolved within the next ten years in a very similar manner.
I think it's going to be up to the platforms. Do they want to be broken up? Do they want to be fined? Or do they want to get rejoined in society?
Lord Tim Clement-Jones: Yeah, I mean I could go on and really bore everybody with the different forms of remedies available to our competition regulators. But David talked about big oil, which was broken up by what are called structural remedies.
Now, it may well be that, in the future, regulators—because of the power of the tech platforms—are going to have to think about exactly doing that, say, separating Facebook from YouTube or from Instagram, or things of that sort.
We're now out of the era of "move fast and break things." We now expect a level of corporate responsibility from these platforms because of the power they wield. I think we have to think quite big in terms of how we're going to regulate.
Should governments regulate social media?
Michael Krigsman: We have another comment from Twitter, again from Arsalan Khan. He's talking about, do we need a new world order that requires technology platforms to be built in? It seems like as long as you've got this private sector set of incentives versus the public good, then you're going to be at loggerheads. In a practical way, what are the solutions, the remedies, as you were just starting to describe?
Lord Tim Clement-Jones: What are governments for? Arsalan always asks the most wonderful questions, by the way, as he did last time.
What are governments for? That is what the role of government is. It is, in a sense, a brokerage. It's got to understand what is for the benefit of, if you like, society as a whole and, on the other hand, what are the freedoms that absolutely need preserving and guaranteeing and so on.
I would say that we have some really difficult decisions to make in this area. But David and I come from the point of view of actually creating more freedom because the impact of the platforms (in many, many ways) will be to reduce our freedoms if we don't do something about it.
Lord David Puttnam: Very much so, and that's why I would argue, Michael, that Facebook's reaction, or response, in Australia was so incredibly clumsy, because what it did was beg a question we could really have done without, which is: are they more powerful than sovereign nations?
Now, you can't go there because, once you get the G7 or the G20 together, you're not going to get into a situation where any prime minister is going to concede, "I'm afraid there's nothing we can do about these guys. They're bigger than us. We're just going to have to live with it." That's not going to happen.
Lord Tim Clement-Jones: The only problem there was the subtext. The legislation was prompted by one of the biggest media organizations in the world. In a sense, I felt pretty uncomfortable taking sides there.
Lord David Puttnam: I think it was just an encouragement to create a new series of an already long-running TV series.
Lord Tim Clement-Jones: [Laughter]
Lord David Puttnam: You're absolutely right about that. I had to put that down as an extraordinary irony of history. The truth is you don't take on nations, though many have tried.
Some of your companies have, and genuinely believed that they were bigger. But I would say: don't go there. Frankly, if I were a shareholder in Facebook – I'm not – I'd have been very, very cross with whoever made that decision. It was stupid.
Michael Krigsman: Where is all of this going?
Lord Tim Clement-Jones: We're still heavily engaged in trying to get the legislation right in the UK. But David and I believe that our role is to kind of keep government honest and on track and, actually, go further than they've pledged because this question of individual harm, remedies for that, and a duty of care in relation to individual harm isn't enough. It's got to go broader into societal harm.
We've got a road to travel. We've got draft legislation coming in very, very soon this spring. We've got then legislation later on in the year, but actually getting it right is going to require a huge amount of concentration.
Also, we're going to have to fight off objections on the basis of freedom of expression and so on and so forth. We are going to have to reassert our determination in principle, basically. I think there's a great deal of support out there, particularly in terms of protection of young people and things of that sort that we're determined to see happen.
Political messages and digital literacy
Michael Krigsman: Is there the political will, do you think, to follow through with these kinds of changes you're describing?
Lord David Puttnam: In the interest of a vibrant democracy, when any prime minister or president of any country looks at the options, I don't think they're facing many alternatives. I can't really imagine Macron, Johnson, or anybody else relishing the options available to them.
They may find those options quite uncomfortable, and the ability of some of these platforms to embarrass politicians is considerable. But when they actually look at the options, I'm not sure they're faced with that many alternatives other than pressing down the route that Tim just laid out for you.
Lord Tim Clement-Jones: I think the real Achilles heel, though, that David's report pointed out really clearly, and the government failed to answer satisfactorily, was the whole question of electoral regulation, basically. The use of misleading political messaging during elections, the impact of, if you like, opaque political messaging where it's not obvious where it's coming from, those sorts of things.
As for the determination of governments: because they are in control and they are benefiting from some of that messaging, there's a great reluctance to take on the platforms in those circumstances. Most platforms are pretty reluctant to take down any form of political advertising or messaging or, in a sense, moderate political content.
That for me is the bit that I think is going to be the agenda that we'll probably be fighting on for the next ten years.
Lord David Puttnam: Michael, it's quite interesting that both of the major parties – not Tim's party, as you behaved very well – both of the major parties actually misled us. I wouldn't say lied to us, but they misled us in the evidence they gave about their use of the digital environment during an election, which was really lamentable. We called them out, but the fact that, in both cases, they felt they needed to, if necessary, break the law to give themselves an edge is a very worrying indicator of what we might be up against here.
Lord Tim Clement-Jones: The trouble is, political parties love data, because targeted messaging – microtargeting, as it's called – is potentially very powerful in gaining support. It's like a drug. It's very difficult to wean politicians off what they see as a new, exciting tool to gain support.
Michael Krigsman: I work with various software companies, major software companies. Personalization based on data is such a major focus of every aspect of technology, with tentacles that invade our lives. When done well, it's intuitive and it's helpful. But you're talking about the often indistinguishable case where it's done invasively, insinuating itself into the pattern of our lives. How do you even start to grapple with that?
Lord Tim Clement-Jones: It kind of bubbled up in the Cambridge Analytica case where the guy who ran the company was stupid enough to boast about what they were able to do. What it illustrated is that that was the tip of a very, very worrying nightmare for all of us.
No, I mean this is where you come back to individual responsibility. The idea that the management of Facebook or the management of Google are not appalled by that possibility, and aren't doing everything they can to prevent it, is, I think, what gives everyone at Twitter nightmares.
I don't think they ever intended or wanted to have the power they have in these fringe areas, but they're stuck with them. The answer is, how do we work with governments to make sure they're minimized?
Lord Tim Clement-Jones: This, Michael, brings in one of David's and my favorite subjects, which is digital literacy. I'm an avid reader of people who try and buck the trend. I love Jaron Lanier's book Ten Arguments for Deleting Your Social Media Accounts Right Now. I love the book by Carissa Véliz called Privacy Is Power.
Basically, that kind of understanding of what you are doing when you sign up to a platform—when you give your data away, when you don't look at the terms and conditions, you tick the boxes, you accept all cookies, all these sorts of things—it's really important that people understand the consequences of that. I think it's only a tiny minority who have this kind of idea they might possibly live off-grid. None of us can really do that, so we have to make sure that when we live with it, we are not giving away our data in those circumstances.
I don't practice what I preach half the time. We're all in a hurry. We all want to have a look at what's on that website. We hit the accept-all-cookies button or whatever it may be, and we go through. We've got to be more considered about how we do these things.
Lord David Puttnam: Chapter 7 of our report is all about digital literacy. We went into it in great depth. Again, fairly lamentable failure by most Western democracies to address this.
There are exceptions. Estonia is a terrific exception. Finland is one of the exceptions. They're exceptions because they understand the danger.
Estonia sits right on the edge with its vast neighbor Russia with 20% of its population being Russian. It can't afford misinformation. Misinformation for them is catastrophe. Necessarily, they make sure their young people are really educated in the way in which they receive information, how they check facts.
We are very complacent in the West, I've got to say, and I'll say this about the United States: we're unbelievably complacent in those areas, and we're going to have to get smart. We've got to make sure that young people get extremely smart about the way they're fed, and react and respond to, information.
Lord Tim Clement-Jones: Absolutely. Our politics, right across the West, demonstrate that there's an awful lot of misinformation, which is believed – believed as the gospel, effectively.
Balancing freedom of speech on social media and cyberwarfare
Michael Krigsman: We have another question from Twitter. How do you balance social media reach versus genuine freedom of speech?
Lord David Puttnam: I thought I answered it. Obviously, I didn't. It's that you accept the fact that freedom of speech requires that people can say what they want. This goes back to the black boxes. At a certain moment, the box intervenes and says, "Whoa. Just a minute. There is no truth in what you're saying," or, worse, in the case of anti-vaxxers, "There is actual harm and damage in what you're saying. We're not going to give you reach."
What you do is you limit reach until the person making those statements can validate them or affirm them or find some other way of, as it were, being allowed to amplify. It's all about amplification. It's trying to stop the amplification of distortion and lies and really quite dangerous stuff like the anti-vaxx.
We've got a perfect trial run, really, with anti-vaxxing. If we can't get this right, we can't get much right.
Lord Tim Clement-Jones: There are so many ways. When people say, "Oh, how do we do this?" – well, you've got sites like Reddit, which have different communities, with rules applying to each community that have to conform to a particular standard.
Then you've got Avaaz, with not only detoxing the algorithm but also the duty of correction. Then you've got great organizations like NewsGuard who, in a sense, have a sort of star system to verify the accuracy of news outlets. We do have the tools; we just have to be a bit determined about how we use them.
Michael Krigsman: We have another question from Twitter that I think addresses or asks about this point, which is, how can governments set effective constraints when partisan politics benefits from misusing digital technologies and even spreading misinformation?
Lord David Puttnam: Tim laid it out for you early on why the House of Lords existed. This is where it actually gets quite interesting.
Both Tim and I, during our careers – and we both go back, I think, 25 years – have managed to get amendments into legislation against the head. That's to say, amendments that didn't suit either the government of the day or even the lead opposition of the day. The independence of the House of Lords is wonderfully valuable. It is expert and it does listen.
Just a tiny example: if someone said to me or Tim, "Why were you not surprised that your report didn't get more traction?" – well, it's 77,000 words long. Yeah, it's 77,000 words long because it's a bloody complicated subject. We had the time and the luxury to do it properly.
I don't think that will necessarily prove to be a stumbling block. We have enough ... [indiscernible, 00:37:01] embarrassment. The quality of the House of Lords and the ability to generate public opinion, if you like, around good, sane, sensible solutions still do function within a democracy.
But if you go down the road that Tim was just saying, if you allow the platforms to go in the route they appear to have taken, we'll be dealing with autocracy, not democracy. Then you're going to have a set of problems.
Lord Tim Clement-Jones: David is so right. The power of persuasion still survives in the House of Lords. Because the government doesn't have a majority, we can get things done if that power of persuasion is effective. We've done that quite a few times over the last 25 years, as David says.
Ministers know that. They know that if you espouse a particular cause that is clearly sensible, they're going to find themselves on a pretty sticky wicket – or whatever the appropriate baseball analogy would be, Michael – in those circumstances. We have had some notable successes in that respect.
For instance, only a few years ago, we got a new code for age-appropriate design, which means that webpages now need to take account of the age of the individuals actually accessing them. It's now called the Children's Code. It came into effect last year and it's a major addition to our regulation. It was quite heavily resisted by the platforms and others when it came in, but a single colleague of David's and mine (supported by us) drove it through, greatly to her credit.
Michael Krigsman: We have two questions now, one on LinkedIn and one on Twitter, that relate to the same topic. That is the speed of government, the speed of change, and government's ability to keep up. On Twitter, for example: future wars are going to be cyber, and government is just catching up. The technology is changing so rapidly that it's very difficult for the legal system to track it. How do we manage that aspect?
Lord Tim Clement-Jones: Funnily enough, governments do think about that. Their first thought is about cybersecurity – about their own cyber, basically, their data.
We've got a brand new national cybersecurity centre, about a year or two old now. The truth is, particularly in view of Russian activities, we now have quite good cyber controls. I'm not sure that our risk management is fantastic but, operationally, we are pretty good at this.
For instance, things like the SolarWinds hack of last year have been looked at pretty carefully. We don't know what the outcome is, but it's been looked at pretty carefully by our national cybersecurity centre.
Strangely enough, the criticism I have with government is, if only they thought of our data in the way that they thought about their data, we'd all be in a much happier place, quite honestly.
Lord David Puttnam: I think that's true. Michael, I don't know whether this is absolutely true in the U.S., because it's such a vast country, but my experience of legislation is that it can be moved very quickly when there's an incident. I'll give you an example.
I was at the Department for Education at a moment when a baby was allowed to die through a very unfortunate, catastrophic failure across different systems of government. The entire department ground to a halt for about two months while this was looked at, whilst the department tried to explain itself, and any amount of legislation was brought forward. Governments deal in crises, and this is going to be a series of crises.
The other thing governments don't like is judicial review. I think we're looking at an area here where judicial review—either by the platforms for a government decision or by civil society because of a government decision—is utterly inevitable. I actually think, longer-term, these big issues are going to be decided in the courts.
Advice for policymakers and business people
Michael Krigsman: As we finish up, can I ask you each for advice to several different groups? First is the advice that you have for governments and for policymakers.
Lord Tim Clement-Jones: Look seriously at societal harms. The duty of care is not enough if it simply protects individual citizens. It is all about looking at the wider picture because, if you don't, you're going to find it's too late and your own democracy is going to suffer.
I think you're right, Michael, in a sense that some politicians appear to have a conflict of interest on this. If you're in control, you don't think of what it's like to have the opposition or to be in opposition. Nevertheless, that's what they have to think about.
Lord David Puttnam: I was very impressed, indeed, tuning in to some of the judiciary subcommittee congressional hearings on the platforms. I thought that the chairman ... [indiscernible, 00:42:35] did extremely well.
There is a lot of expertise. You've got more expertise, actually, Michael, in your country than we have in ours. Listen to the experts, understand the ramifications, and, for God's sake, politicians, it's in their interests, all their interests, irrespective of Republicans or Democrats, to get this right because getting it wrong means you are inviting the possibility of a form of government that very, very, very few people in the United States wish to even contemplate.
Michael Krigsman: What about advice to businesspeople, to the platform owners, for example?
Lord David Puttnam: Well, we had an interesting spate, didn't we, where a lot of advertisers started to take issue with Facebook, and that kind of faded away. But I would have thought that, again, it's a question of regulatory oversight and businesses understanding.
How many businesses in the U.S. want to see democracy crumble? I mean, I was quite interested, immediately after the January 6th thing, in the way businesses walked away, not so much from the Republican party, but from Trump.
I just think we've got to begin to hold up a mirror to ourselves and also look carefully at what the ramifications of getting it wrong are. I don't think there's a single business in the U.S. (or if there are, there are very, very few) who wish to go down that road. They're going to realize that that means they've got to act, not just react.
Lord Tim Clement-Jones: I think this is a board issue. This is the really important factor.
Looking at the other side – not the platform side, because I think they are only too well aware of what they need to do – if I'm, if you like, somebody who is using social media, then, as a board member, you have to understand the technology and you have to take the time to do that.
The advertising industry—really interesting, as David said—is developing all kinds of new technology solutions, like blockchain, to track where their advertising messages are going. If they're directed in the wrong way, they find out, and there's accountability down the blockchain, which is really smart in the true sense of the word.
It's using technology to understand technology. I think you can't leave it to the chief information officer or the chief technology officer. As the CEO or the chair, you have to understand it.
Lord David Puttnam: Tim is 100% right. I've sat on a lot of boards in my life. If you really want to grab a board's attention – I'm not saying which part of the body you're going to grab – start looking at the register and then have a conversation about how adequate the directors' insurance is. It makes for a very lively discussion.
Lord Tim Clement-Jones: [Laughter]
Lord David Puttnam: I think this whole issue of personal responsibility, the things that insurance companies will and won't take on in terms of protecting companies and boards, that's where a lot of this could land and very interestingly.
Importance of digital education
Michael Krigsman: Let's finish up by any thoughts on the role of education and advice that you may have for educators in helping prepare our citizens to deal with these issues.
Lord Tim Clement-Jones: Funnily enough, I've just developed (with a group of people) a framework for ethical AI for use in education. We're going to be launching that in March.
The equivalent is needed in many other ways because, of course, digital literacy, digital education, is incredibly important. Actually, this isn't just a younger-generation issue; it concerns parents and teachers too. It needs to go all the way through. I think we need to be much more proactive about the tools that are out there for parents and others, even main board directors.
You cannot spend enough time talking about the issues. That's why, when David mentioned Cambridge Analytica, suddenly everybody got interested. But it's a very rare example of people suddenly becoming sensitized to an issue that they previously didn't really think about.
Lord David Puttnam: It's a parallel, really, with climate change. These are our issues. If we're going to prepare our kids – I've got three grandchildren – if we're going to prepare them properly for the remainder of their lives, we have an absolute obligation to explain to them what challenges their lives will face, what forms of society they're going to have to rally around, what sort of governance they should reasonably expect, and how they'll participate in all of that.
If they're left in ignorance—be it on climate change or, frankly, on all the issues we've been discussing this evening—we are making them incredibly vulnerable to a form of challenge and a form of life far removed from the very privileged lives we've lived. I think that the lives of our grandchildren, unless we get this right for them and help them, will be very diminished.
I use that word a lot recently. They will live diminished lives and they'll blame us, and they'll wonder why it happened.
Michael Krigsman: Certainly, one of the key themes that I've picked up from both of you during this conversation has been this idea of responsibility, individual responsibility for the public welfare.
Lord David Puttnam: Unquestionably. It's summed up in the phrase "duty of care". We have an absolutely overwhelming duty of care for future generations, and it applies as much to the digital environment as it does to climate.
Lord Tim Clement-Jones: Absolutely. In a way, what we're now having to overturn is this whole idea that online was somehow completely different to offline, to the physical world. Well, some of us have been living in the online remote world for the whole of last year, but why should standards be different in that online world? They shouldn't be. We should expect the same standards of behavior and we should expect people to be accountable for that in the same way as they are in the offline world.
Michael Krigsman: Okay. Well, what a very interesting conversation. I would like to express my deep thank you to Lord Tim Clement-Jones and Lord David Puttnam for joining us today.
David, before we go, I just have to ask you. Behind you and around you are a bunch of photographs and awards that seem distant from your role in the House of Lords. Would you tell us a little bit more about your background very quickly?
Lord David Puttnam: Yes. I was a filmmaker for many years. That's an Emmy sitting behind me. The reason the Emmy is sitting there is the shelf isn't deep enough to take it. But I got my Oscar up there. I've got four or five Golden Globes and three or four BAFTAs, David di Donatello, and Palme d'Or from Cannes. I had a very, very happy, wonderfully happy 30 years in the movie industry, and I've had a wonderful 25 years working with Tim in the legislature, so I'm a lucky guy, really.
https://www.cxotalk.com/episode/digital-technology-trust-social-impact
House of Lords Member talks AI Ethics, Social Impact, and Governance
CXO Talk Jan 2021
What are the social, political, and government policy aspects of artificial intelligence? To learn more, we speak with Lord Tim Clement-Jones, Chairman of the House of Lords Select Committee on AI and advisor to the Council of Europe AI Committee.
What are the unique characteristics of artificial intelligence?
Michael Krigsman: Today, we're speaking about AI, public policy, and social impact with Lord Tim Clement-Jones, CBE. What are the attributes or characteristics of artificial intelligence that make it so important from a policy-making perspective?
Lord Tim Clement-Jones: I think the really key thing is (and I always say) AI has to be our servant, not our master. I think the reason that that is such an important concept is because AI potentially has an autonomy about it.
Brad Smith calls AI "software that learns from experience." Well, of course, if software learns from experience, it's effectively making things up as it goes along. It depends, obviously, on the original training data and so on, but it does mean that it can do things not quite of its own volition but certainly of its own motion, which therefore have implications for us all.
Where you place those AI applications, algorithms (call them what you like) is absolutely crucial because if they're black boxes, humans don't know what is happening, and they're placed in financial services, government decisions over sentencing, or a variety of really sensitive areas then, of course, we're all going to be poorer for it. Society will not benefit from that if we just have this range of autonomous black box solutions. In a sense, that's slightly a rather dystopian way of describing it, but it's certainly what we're trying to avoid.
Michael Krigsman: How is this different from existing technologies, data, and analytics that companies use every day to make decisions and consumers don't have access to the logic and the data (in many cases) as well?
Lord Tim Clement-Jones: Well, of course, it may not be if those data analytics are carried out by artificial intelligence applications. There are algorithms that, in a sense, operate on data and come up with their own conclusions without human intervention. They have exactly the same characteristic.
The issue for me is this autonomy aspect. With data analytics, if you've got actual humans in the loop, so to speak, then that's fine. We, as you know, have slightly tighter, well, considerably tighter, data protection in Europe (as a framework) for decision-making when you're using data. The aspect of consent or using sensitive data, a lot of that is covered. One has a kind of reassurance that there is, if you like, a regulatory framework.
But when it comes to automaticity, it is much more difficult because, at the moment, you don't necessarily have duties relating to the explainability of algorithms or the freedom from bias of algorithms, for instance, in terms of the data that's input or the decisions that are made. You don't necessarily have an overarching rule that says AI must be developed for human benefit and not, if you like, for human detriment.
There are a number of kinds of areas which are not covered by regulation. Yet, there are high-risk areas that we really need to think about.
Algorithmic decision-making and risks
Michael Krigsman: You focus very heavily on this notion of algorithmic decision-making. Please elaborate on that, what you mean by that, and also the concerns that you have.
Lord Tim Clement-Jones: Well, it's really interesting because, actually, quite a lot of the examples that one is trying to avoid come from the States. For instance, parole decisions made using artificial intelligence, or live facial recognition technology using artificial intelligence.
Sometimes, you get biased decision-making of a discriminatory nature in racial terms. That was certainly true in Florida with the COMPAS parole system. It's one of the reasons why places like Oakland, Portland, and San Francisco have banned live facial recognition technology in their cities.
Those are the kinds of aspects which you really do need to have a very clear idea of how you design these AI applications, what data you're putting in, how that data trains the algorithm, and then what the output is at the end of the day. It's trying to get some really clear framework for this.
You can call it an ethical framework. Many people do. I call it just, in a sense, a set of principles that you should basically put into place for, if you like, the overall governance or the design and for the use cases that you're going to use for the AI application.
Michael Krigsman: What is the nature of the framework that you use, and what are the challenges associated with developing that kind of framework?
Lord Tim Clement-Jones: I think one of the most important aspects is that this needs to be cross-country. This needs to be international. My desire, at the end of the day, is to have a framework which, in a sense, assesses the risk.
I am not a great regulator. I don't really believe that you've got to regulate the hell out of AI. You've got to basically be quite forensic about this.
You've got to say to yourself, "What are the high-risk areas that are in operation?" It could be things like live facial recognition. It could be financial services. It could be certain quite specific areas where there are high risks of infringement of privacy or decisions being made in a biased way, which have a huge impact on you as an individual or, indeed, on society because social media algorithms are certainly not free of issues to do with disinformation and misinformation.
Basically, it starts with an assessment of what the overall risk is, and then, depending on that level of risk, you say to yourself, "Okay, a voluntary code. Fine for certain things in terms of ethical principles applied."
But if the risk is a bit high, you say to yourself, "Well, actually, we need to be a bit more prescriptive." We need to say to companies and corporations, "Look, guys. You need to be much clearer about the standards you use." There are some very good international standard bodies, so you prescribe the kinds of standards, the design, an assessment of use case, audit, impact assessments, and so on.
There are certain other things where you say, "I'm sorry, but the risk of detriment, if you like, or damage to civil liberties," or whatever it may be, "is so high that, actually, what we have to have is regulation."
Then you have a framework. You say to yourself: you can only use, for instance, live facial recognition in this context, and you must design your application in this particular way.
I'm a great believer in a graduation, if you like, of regulation depending on the risk. To me, it seems that we're moving towards that internationally. I actually believe that the new administration in the States will move forward in that kind of way as well. It's the way of the world. Otherwise, we don't gain public trust.
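As a purely illustrative sketch (not anything proposed in the conversation), the graduated, risk-based approach described above can be thought of as a simple triage function. The tier names and numeric thresholds here are invented assumptions:

```python
def governance_tier(risk_score: float) -> str:
    """Map an assessed level of risk to a governance response,
    mirroring the graduated approach described above.
    Thresholds and labels are purely illustrative."""
    if risk_score < 0.3:
        # Low risk: a voluntary code of ethical principles suffices.
        return "voluntary code"
    elif risk_score < 0.7:
        # Medium risk: prescribed standards, design review,
        # impact assessments, and audit.
        return "prescribed standards"
    else:
        # High risk (e.g. live facial recognition): binding
        # regulation on where and how the application may be used.
        return "regulation"

print(governance_tier(0.9))  # a high-risk application lands in "regulation"
```

The point of the sketch is the shape of the decision, proportionate escalation rather than one-size-fits-all, not any particular threshold.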
Trust and confidence in AI policy
Michael Krigsman: The issue of trust is very important here. Would you elaborate on that for us?
Lord Tim Clement-Jones: There are cultural issues here. One of the examples that we used in our original House of Lords report was GM foods. There's a big gulf, as you know, between the approach to GM foods in the States and in Europe.
In Europe, we sort of overreacted and said, "Oh, no, no, no, no, no. We don't like this new technology. We're not going to have it," and so on and so forth. Well, it was handled extremely badly because it looked as though it was just a major U.S. corporation that wanted to have its monopoly over seed production and it wasn't even possible for farmers to grow seed from seed and so on.
In a sense, all the messages were got wrong. There was no overarching ethical approach to the use of GM foods, and so on. We're determined not to get that wrong this time.
The reason why GM foods didn't take off in Europe was because, basically, the public didn't have any trust. They believed, if you like, an awful lot of (frankly) the myths that were surrounding GM foods.
It wasn't all myth. They weren't convinced of the benefit. Nobody really explained the societal benefits of GM foods.
Whether it would have been different, I don't know. Whether those benefits would have been seen to outweigh some of the dangers that people foresaw, I don't know. Certainly, we did not want this kind of approach to take place with artificial intelligence.
Of course, artificial intelligence is a much broader technology. A lot of people say, "Oh, you shouldn't talk about artificial intelligence. Talk about machine learning or probabilistic learning," or whatever it may be. But AI is a very useful, overall description in my view.
Michael Krigsman: How do you balance the competing interests, for example, the genetically modified food example you were just speaking about, the interest of consumers, the interest of seed producers, and so forth?
Lord Tim Clement-Jones: I think it's really interesting because I think you have to start with the data. You could have a set of principles. You could say that app developers need to look at the public benefit and so on and so forth. But the real acid test is the data that you're going to use to train the AI, the algorithm, whatever you may describe it as.
That's the point where there is this really difficult issue about what data is legitimate to extract from individuals. What data should be publicly valued and not sold by individual companies or the state (or whatever)? It is a really difficult issue.
In the States, you've had that brilliantly written book Surveillance Capitalism by Shoshana Zuboff. Now that raises some really important issues. Should an individual's behavioral data—not just ordinary personal data, but their behavioral data—be extractable and usable and treated as part of a data set?
That's why there is so much more discussion now about, well, what value do we attribute to personal data? How do we curate personal data sets? Can we find a way of not exactly owning but, certainly, controlling (to a greater extent) the data that we impart, and is there some way that we can extract more value from that in societal terms?
I do think we have to look a bit more. Certainly, in the UK, we've been very keen on what we call data trusts or social data foundations: institutions that hold data, public data; for instance, our national health service. Obviously, you have a different health service in the States, but data held by a national health service could be held in a data trust and, therefore, people would see what the framework for governance was. This would actually be very reassuring in many ways: people would see that their data was simply going to be used back in the health service or, if it was exploited by third parties, that that was again for the benefit of the national health service, whether vaccinations, diagnosis of rare diseases, or whatever it may be.
It's really seeing the value of that data and not just seeing it as a commercial commodity that is taken away by a social media platform, for instance, and exploited without any real accountability. Arguing that terms and conditions do the job doesn't ever – I'm a lawyer, but I still don't believe that terms and conditions are adequate in those circumstances.
Decision-making about AI policy and governance
Michael Krigsman: We have a very interesting question from Arsalan Khan, who is a regular listener and contributor to CXOTalk. Thank you, Arsalan, always, for all of your great questions. His question is very insightful, and I think also relates to the business people who watch this show. He says, "How do you bring together the expertise (both in policymaking as well as in technology) so that you can make the right decisions as you're evaluating this set of options, choices, and so forth that you've been talking about?"
Lord Tim Clement-Jones: Well, there's no substitute for government coordination, it seems to me. The White House under President Obama had somebody who really coordinated quite a lot of this aspect.
There was, there has been, in the Trump White House, an AI specialist as well. I don't think they were quite given the license to get out there and sort of coordinate the effort that was taking place, but I'm sure, under the new administration, there will be somebody specifically, in a sense, charged with creating policy on AI in all its forms.
The States belongs to the Global Partnership on AI with Canada, France, UK, and so on. And so, I think there is a general recognition that governments have a duty to pull all this together.
Of course, it's a big web. You've got all those academic institutions, powerful academic institutions, who are not only researching into AI but also delivering solutions in terms of ethics, risk assessments, and so on. Then you've got all the international institutions: OECD, Council of Europe, G20.
Then at the national level, in the UK for instance, we've got regulators of data. We have an advisory body that advises on AI, data, and innovation. We have an office for AI in government.
We have The Alan Turing Institute, which pulls together a lot of the research that is being done in our universities. Now, unless somebody is sitting there at the center and saying, "How do we pull all this together?" it becomes extremely incoherent.
We've just had a paper from our competition authority on algorithms and the way that they may create consumer detriment in certain circumstances where they're misleading. For instance, on price comparison or whatever it may be.
Now, that is very welcome. But unless we actually fold that all into what we're trying to do across government and internationally, we're going to find ourselves with one set of rules here and another set of rules there. Actually, trading across borders is difficult enough as it is, and we've got all the Privacy Shield and data adequacy issues at this very moment. Well, if we start having issues about inspection of the guts of an algorithm before an export can take place—because we're not sure that it's conforming to our particular set of rules in our country—then I think that's going to be quite tricky.
I'm a big fan of elevating this and making sure that, right across the board, we've got a common approach. That's why I'm such a big fan of this risk-based approach because I think it's common sense, basically, and it doesn't have one size fits all. I think, also, it means that, culturally, I think we can all get together on that.
Michael Krigsman: Is there a risk of not capturing the nuances because this is so complex and, therefore, creating regulation or even policy frameworks that are just too broad-brushed?
Lord Tim Clement-Jones: There is a danger of that but, frankly, I think, at the end of the day, whatever you say about this, there are going to be tools. I think regulation is going to happen at a sector level, probably.
I think that it's fair enough to be relatively broad-brushed across the board in terms of risk assessment and the general principles to be adopted in terms of design and so on. You've got people like the IEEE who are doing ethically aligned design standards and so on.
It's when it gets down to the sector level that I think you then get more specific. I don't think most of us would have too much objection to that. After all, regulation already tends to align by sector.
For instance, the rules relating to financial services in the States (for instance in mergers, takeovers, and such) aren't very different to those in the UK, but there is a sort of competitive drive towards aligning your regulation and your regulatory rules, so to speak. I'd be quite optimistic that, actually, if we saw that (or if you saw that) there was one type of regulation in a particular sector, you'd go for it.
Automated vehicles, actually, is a very good example where regulation can actually be a positive driver of growth because you've got a set of standards that everybody can buy into and, therefore, there's business certainty.
How to balance competing interests in AI policy
Michael Krigsman: Arsalan Khan comes back with another question, a very interesting point, talking about the balancing of competing goals and interests. If you force open those algorithmic black boxes then do you run the risk of infringing the intellectual property of the businesses that are doing whatever it is that they're doing?
Lord Tim Clement-Jones: Regulators are very used to dealing with these sorts of issues of inspection and audit. I think that it would be perfectly fine for them to do that, and they wouldn't be infringing intellectual property because they wouldn't be exploiting it. They'd be inspecting but not exploiting. I think, at the end of the day, that's fine.
Also, don't forget, we've got this great concept now of sandboxing. The regulators are much more flexible than they used to be.
Michael Krigsman: How do you balance the interests of corporations against the public good, especially when it comes to AI? Maybe give us some specific examples.
Lord Tim Clement-Jones: For instance, we're seeing that in the online situation with social media. We've got this big debate happening, for instance, on whether or not it's legitimate for Twitter to delist somebody in terms of their account with them. No doubt, the same is true with Facebook and so on.
Now, maybe I shouldn't say it isn't fair for a social media platform to have to make those decisions but—because of all the freedom of speech issues—I'd much prefer to see a reasonably clear set of principles and regulations about when social media platforms actually ought to delist somebody.
We're developing that in the UK in terms of Online Harms, so that social media will have certain duties of care towards certain parts of the community, particularly young people and the vulnerable. They will have a duty to actually delist or take off content, what has been called detoxing the algorithm. We're going to try and get a set of principles where people are protected and social media platforms have a duty, but it isn't a blanket duty, and it doesn't mean that social media have to make freedom of speech decisions in quite the same way.
Inevitably, public policy is a balance and the big problem is ignorance. It's ignorance on the part of the social media platforms as to why we would want to regulate them and it's ignorance on the part of politicians who actually don't understand the niceties of all of this when they're trying to regulate.
As you know, some of us are quite dedicated to joining it all up so people really do understand why we're doing these things and getting the right solutions. Getting the right solution in this online area is really tricky.
Of course, in the middle of it, and this is why it's relevant to AI, is the algorithm: the pushing of messages in particular directions, which is autonomous. We're back to this autonomy issue, Michael.
Sometimes, you need to say, "I'm sorry." You need to be a lot more transparent about how this is working. It shouldn't be working in that way, and you're going to have to change it.
Now, I know that's a big, big change of culture in this area, but it's happening and I think that with the new administration, Congress, and so on, I think we'll all be on the same page very shortly.
Michael Krigsman: I have to ask you about the concentration of power that's taken place inside social media companies. Social media companies, many of them born in San Francisco, technology central, and so the culture of technology, historically, has been, "Well, you know, we create tools that are beneficial for everyone, and leave us alone," essentially.
Lord Tim Clement-Jones: Well, that's exactly where I'm coming from: that culture has to change now. There is an acceptance, I think; if you talk to the senior people in the social media companies and the big platforms, they will now accept that the responsibility of having to make decisions about delisting people or about what content should be taken down is not something they feel very comfortable about, and they're getting quite a lot of heat as a result of it. Therefore, I think increasingly they will welcome regulation.
Now, obviously, I'm not predicating what kind of regulation is appropriate outside the UK or what would be accepted but, certainly, that is the way it's worked with us and there's a huge consensus across parties that we need to have a framework for the social media operations. That it isn't just Section 230, as you know, which sort of more or less allows anything to happen. In that sense, you don't take responsibility as a platform. Well, you know, not that we've ever accepted that in full in Europe but, in the UK, certainly.
Now, we think that it's time for social media platforms to take responsibility but recognizing the benefits. Good heavens. I tweet like the next person. I'm on LinkedIn. I'm no longer on Facebook. I took Jaron Lanier's advice.
There are platforms that are out there which are the Wild West. We've heard about Parler as well. We need to pull it together pretty quickly, actually.
Digital ethics: The House of Lords AI Report
Michael Krigsman: We have some questions from Twitter. Let's just start going through them. I love taking questions from Twitter. They tend to be great questions.
You created the House of Lords AI Report. Were there any outcomes that resulted from that? What did those outcomes look like?
Lord Tim Clement-Jones: Somebody asked me and said, "What was the least expected outcome?" I expected the government to listen to what we had to say and, by and large, they did.
To a limited extent, in terms of coordination. On skills, though, they haven't moved nearly fast enough.
They haven't moved fast enough on education and digital understanding, although, we've got a new kind of media literacy strategy coming down the track in the UK. Some of that is due to the pandemic but, actually, it's a question of energy and so on.
They've certainly done well in terms of the climate, in terms of the research investment, and in terms of the kind of nearer-to-market encouragement that they've given. So, I would score their card at about six out of ten. They've done well there.
They sort of said, "Yes, we accept your ethical AI, your trustworthy AI message," which was a core of what we were trying to say. They also accepted the diversity message. In fact, if I was going to say where they've performed best in terms of taking it on board, it's this diversity in the AI workforce, which I think is the biggest plus.
The really big plus has been the way the private sector in the UK has taken on board the messages about trustworthy AI, ethical AI. Now, techUK, which is our overarching trade body in the UK, they now have a regular annual conference about ethics and AI, which is fantastic. They're genuinely engaged.
In a sense, the culture of the AI app developer really encompasses ethics now. We don't have a kind of Hippocratic oath for developers but, certainly, the expectations are that developers are much more plugged into the principles by which they are designing artificial intelligence. I think that will continue to grow.
The education role that techUK has played with their members has been fantastic and is a general expectation (across the board) by our regulators. We've reinforced each other, I think, probably, in that area, which I think has been very good because, let's face it, the people who are going to develop the apps are the private sector.
The public sector, by and large, procure these things. They've had sets of ethical principles now for procurement that they've put in place: World Economic Forum principles, data-sharing frameworks, and so on, or ethical data sharing frameworks.
Generally, I think we've seen a fair bit of progress. But we did point out in our most recent report where they ran the risk of being complacent, and we warned against that, basically.
Michael Krigsman: We have a really interesting question from Wayne Anderson. Wayne makes the point that it's difficult to define digital ethics at scale because of the competing interests across society that you've been describing. He said, "Who owns this decision-making, ultimately? Is it the government? Is it the people? How does it manifest? And who decides what AI is allowed to do?"
Lord Tim Clement-Jones: That's exactly my risk-based approach. It depends on what the application is. You do not want a big brother type government approach to every application of AI. That would be quite stupid. They couldn't cope anyway and it would just restrict innovation.
What you have to do—and this is back to my risk assessment approach—you have to say, "What are the areas where there's potential of detriment to the citizens, to the consumers, to society? What are those areas and then what do we do about them? What are the highest risks?"
I think that is a proportionate way of looking at dealing with AI. That is the way forward for me, and I think it's something we can agree on, basically, because risk is something that we understand. Now, we don't always get the language right, but that's something I think we can agree on.
Michael Krigsman: Wayne Anderson follows up with another very interesting question. He says, "When you talk about machine learning and statistical models, it's not soundbite friendly. To what degree is ignorance of the problem and the nature of what's going on and the media inflaming the challenges here?"
Lord Tim Clement-Jones: The narrative of AI is one of the most difficult and the biggest barriers to understanding: public understanding, understanding by developers, and so on.
Unfortunately, we're victims in the West of a sort of 3,000-year-old narrative. Homer wrote about robots. Jason and the Argonauts had to escape from a robot walking around the Isle of Crete. That was 3,000 years ago.
It's been in our myths. We've had Frankenstein, the Prague Golem, you name it. We are frightened, societally existentially frightened by "other," by the "other," by alien creatures.
We think of AI as embedded in physical form, in robots, and this is the trouble. We've seen headlines about terminator robots.
For instance, when we launched our House of Lords report, we had headlines about the House of Lords saying there must be an ethical code to prevent terminator robots. You can't get away from the narrative, so you have to double down, and keep doubling down, on public trust in terms of reassurance about the principles that are applied, about the benefits of AI applications, and so on.
This is why I raised the GM foods point because—let's face it—without much narrative about GM foods, they were called Frankenfoods. They didn't have a thousand years of history about aliens, but we do in AI, so the job is bigger.
Impact of AI on society and employment
Michael Krigsman: Any conversation around AI ethics must include a discussion of the economic impacts of AI on society and the displacement, worker displacement, and economic displacements that are taking place. How do we bring that into the mix?
Lord Tim Clement-Jones: There are different forecasts and we have to accept the fact that some people are very pessimistic about the impact on the workforce of artificial intelligence and others who are much more sanguine about it. But there are choices to be made.
We have been here before. If you look at 5th Avenue in 1903, what do you see? You see all horses. If you look at 5th Avenue in 1913, you see all cars. I think you see one horse in the photograph.
This is something that society can adjust to but you have to get it right in terms of reskilling. One of the big problems is that we're not moving fast enough.
Not only is it about education in schools—which is not just scientific and technological education—it's about how we use AI creatively, how we use it to augment what we do, to add to what we do, not just simply substitute for what we do. There are creative ways we need to learn about in terms of using AI.
Then, of course, we have to recognize that we have to keep reinventing ourselves as adults. We can't just expect to have the same job for 30 years now. We have to keep adjusting to the technology as it comes along.
To do that, you can't just do it by yourself. You have to have—I don't know—support from government like a life-long learning account as if you were getting a university loan or grant. You've got to have employers who actually make the effort to make sure that their worker skills don't simply become obsolete. You've got to be on the case for that sort of thing. We don't want a kind of digital rustbelt in all of this.
We've got to be on the case and it's a mixture of educators, employers, government, and individuals, of course. Individuals have to have the understanding to know that they can't just simply take a job and be there forever.
Michael Krigsman: Again, it seems like there's this balancing that's taking place. For example, in the role of government in helping ease this set of economic transitions but, at the same time, recognizing that there will be pain and that individuals also have to take responsibility. Do I have that right, more or less?
Lord Tim Clement-Jones: Absolutely. I'm not a great fan of the government doing everything for us because they don't always know what they need to do. To expect government to simply solve all the problems with a wave of the financial wand, I think, is unreasonable.
But I do think this is a collaboration that needs to take place. We need to get our education establishment—particularly universities and further education in terms of pre-university colleges and, if you like, those developing different kinds of more practical skills—involved so that we actually have an idea about the kinds of skills we're going to need in the future. We need to continually be looking forward to that and adjusting our training and our education to that.
At the moment, I just don't feel we're moving nearly fast enough. We're going to wake up with a dreadful hangover (if we're not careful) with people without the right skills but the jobs can't be filled and, yet, we have people who can't get jobs.
This is a real issue. I'm not one of the great pessimists. I just think that, at any pace, we have a big challenge.
Michael Krigsman: We also need to talk about COVID-19. Where is the UK in dealing with this issue? As somebody in the House of Lords, what is your role in helping manage it?
Lord Tim Clement-Jones: My job is to push and pull and kick and shove and try and move government on, but also be a bit of a hinge between the private sector, academia, and so on. We've got quite a community now of people who are really interested in artificial intelligence, the implications, how we further it to public benefit, and so on. I want to make sure that that community is retained and that government ministers actually listen to that community and are a part of that community.
Now, you know, I get frustrated sometimes because government doesn't move as fast as we all want it to. On algorithmic decision-making, our government hasn't yet woken up to the need for a fairly clear governance and compliance framework, but they'll come along. I'd love it if they were a bit faster, but I've still got enough energy to keep pushing them as fast as I can go.
Michael Krigsman: Any thoughts on what the post-pandemic work world will look like?
Lord Tim Clement-Jones: [Loud exhale] I mean, this is the existential threat, if you like, because of the combination of COVID and the acceleration of remote working, particularly where lightbulbs have gone off in a lot of boardrooms about what is now possible in terms of the use of technology, which weren't there before. If we're not careful, and if people don't make the right decisions in those boardrooms, we're going to find substitution of people by technology taking place to quite a high degree without thinking about how the best combination between technology and humans works, basically. It's just going to be seen as, "Well, we can save costs and so on," without thinking about the human implications.
If I were going to issue any kind of gypsy's warning, that's what I'd say is that, actually, we're going to find ourselves in a double whammy after the pandemic because of new technology being accelerated. All those forecasts, actually, are going to come through quicker than we thought if we're not careful.
Michael Krigsman: Any final closing thoughts as we finish up?
Lord Tim Clement-Jones: I use the word "community" a fair bit, but what I really like about the world of AI (in all its forms) whatever we're interested in—skills, ethics, regulation, risk, development, benefit, and so on—is the fact that we're a tribe of people who like discussing these things, who want to see results, and it's international. I really do believe that the kind of conversation you and I have had today, Michael, is really important in all of this. We've got international institutions that are sharing all this.
The worst thing would be if we had a race to the bottom with AI and its principles. "Okay, no, we won't have that because that's going to damage our competitiveness," or something. I think I would want to see us collaborate very heavily, and they're used to that in academia. We've got to make sure that happens in every other sphere.
Michael Krigsman: All right. Well, a very fast-moving conversation. I want to say thank you to Lord Tim Clement-Jones, CBE, for taking time to be with us today. Thank you for coming back.
Lord Tim Clement-Jones: Pleasure. Absolute pleasure, Michael.
https://www.cxotalk.com/episode/house-lords-member-talks-ai-ethics-social-impact-governance
UK at risk without a national data strategy
Leading peers on the House of Lords Select Committee on Artificial Intelligence worry that the UK will neither benefit from nor control AI as the national data strategy is delayed.
IDG Connect | MAR 21, 2021 11:30 PM PDT
The UK has no national data strategy, which places the country's businesses and citizens at risk, according to the chair of the House of Lords Select Committee on Artificial Intelligence (AI). A national data strategy was promised in the autumn of 2020, but the chair of the AI Select Committee says a government consultation programme that closed in December 2020 was too shallow to provide the UK with the framework needed to derive economic, societal and innovative benefit.
“The National Data Strategy has been delayed and will report in small parts, which will not encourage debate,” says Lord William Wallace, a Cabinet spokesperson in the House of Lords, the second chamber of the British Parliament. Lord Wallace and his fellow Liberal Democrat peer Lord Tim Clement-Jones are at the forefront of a campaign within the corridors of British political power to get the National Data Strategy debated properly by those it will impact - UK businesses and citizens - and then put into practice under the leadership of a UK government Chief Data Officer.
“The questions in the consultation were closed in nature and very much suggested the government already had a view and did not want to encourage debate,” Lord Wallace adds. The current government, which has been in place since 2010, has been incredibly vocal over the last decade about the importance of data to the UK. “They talk of nothing else and set up bodies like NHSX, and Dominic Cummings was a big fan of data,” Wallace says of the former advisor to Vote Leave, the Conservative Party and Prime Minister Boris Johnson. Lord Tim Clement-Jones worries that the attitudes of Cummings - who was forced out of the government in late 2020 - have coloured the government’s approach to a national data strategy. “He treated data as a commodity, and if data is in the hands of somebody that sees it as a commodity, it will not be protected, and that is not good for society. Palantir has a very similar view; the data is not about citizen empowerment,” Lord Clement-Jones says of the US data firm that was working on a UK Covid-19 data store.
“A small minority of politicians are following this issue, and the National Data Strategy is under the remit of the Department for Culture Media and Sport (DCMS), which is not the most powerful department in the Cabinet,” Lord Wallace says.
In December, the House of Lords Select Committee on Artificial Intelligence published a report, AI in the UK: No Room for Complacency, which called for the establishment of a Cabinet Committee “to commission and approve a five-year strategy for AI…ensuring that understanding and use of AI, and the safe and principled use of public data, are embedded across the public service.”
Lord Clement-Jones says a Cabinet-level committee is vital due to the ad hoc status of the committee he chairs. In addition, the rate of AI growth requires the government to pay close attention to the detail and impact of AI. As the report revealed: “in 2015, the UK saw £245 million invested in AI. By 2018, this had increased to over £760 million. In 2019 this was £1.3 billion...It is being used to help tackle the COVID-19 pandemic, but is also being used to underpin facial recognition technology, deep fakes, and other ethically challenging uses.”
“One of the big issues for us is, where do you draw the line for public usage? AI raises lots of issues, and as a select committee, we are navigating the new world of converging technologies such as the Internet of Things, cloud computing and the issue of sovereignty. And we have seen in the last few months that this government will subordinate all sorts of issues to sovereignty,” Lord Clement-Jones says. He added that, as a result of the sovereignty debate, businesses on both sides of the Channel have lost vital mutual benefits.
“You have to look at these issues incredibly carefully. If people are too cavalier about things, like the Home Office has been over work permits, then it's very concerning...Take the recent trade deal with Japan, it is not at all clear that UK health data is part of this deal, and the government is walking blindly into this stuff,” Lord Clement-Jones says.
Data adequacy between the UK and Europe ends in June 2021 and a number of CIOs report concerns about the loss of existing data standards and protocols with the UK’s largest trading partner. “Relationships between government and business are very poor,” the Lord adds.
Despite the attitude of “F**k business” from British Prime Minister Boris Johnson, Lord William Wallace says there is a vibrant debate about data and ethics amongst the UK business and technology community, which has to be harnessed because, he says, data is not debated enough in politics or the mainstream media. “We only hear the lurid headlines about Cambridge Analytica and never the benefits this technology offers.”
Data did, momentarily, become mainstream during the worst periods of the pandemic, with local government and health agencies revealing that they were not being given full access to Covid-19 data by the central government. “The over-centralisation is very much part of the problem; we have not used public health authorities effectively, for example,” Lord Wallace says. He adds that how local and national governments collect and release data to one another needs to be discussed and addressed. “We have some really powerful combined authorities in the UK now, and their data is really granular,” he says, adding that now that GPs and local health bodies are in charge of the UK Covid vaccination programme, successful results are being delivered. Centralisation of the initial pandemic response in the UK has led to the highest death toll in Europe and one of the highest mortality rates in the world.
Global standing
As the UK exited the European Union, there was a narrative from Boris Johnson that the UK’s trading future would be closely aligned with the USA, but with Johnson’s close ally Donald Trump losing the US presidential election in 2020, the two Lords wonder if Johnson can be so assured, especially when it comes to data, and they worry about the impact on British business. “The government don’t stop to look at where data flows,” Lord Clement-Jones says of the poor business relationship leading to a poor understanding. On the USA, they believe the new Biden administration will have to move towards greater data protection. On the flip side of this, Lord Wallace points out that the government has been championing the UK’s role in the Five Eyes security services pact, but it is not clear whether the US security services are able to carry out mass data collection in the UK from the shared intelligence centre at Menwith Hill, claiming that there is no written agreement between the two nations.
It is for this reason the two Lords believe it is vital that the UK engages in a national debate about data’s benefits and public concerns. “The public are most scared about health data as it is the one they are most aware of, yet the debate about the government’s collection of data is absent from public debate,” Lord Wallace says. Lord Clement-Jones adds that he is concerned that there is a danger of public distrust growing. “So now it is about how do we create a debate so that we create a circular flow of data that benefits society and involves important and respected organisations like the Ada Lovelace Institute, Big Brother Watch and the Open Data Institute?”
“The UK remains an attractive place to learn, develop, and deploy AI. It has a strong legal system, coupled with world-leading academic institutions, and industry ready and willing to take advantage of the opportunities presented by AI,” concludes the Lords’ report AI in the UK: No Room for Complacency.
https://www.idgconnect.com/article/3611769/uk-at-risk-without-a-national-data-strategy.html