Regulating the Internet
Lord C-J in discussion with Tom Ascott of the Online Harms Foundation, March 2021.
As technology, AI and the internet weave themselves deep into our societies, Synthetic Society is here to help you untangle and understand the mess. Listen to interviews with leading experts, political figures and entrepreneurs, guiding us through complex issues.
This week our guest is Lord Clement-Jones, and we’ll be talking about whether technology is becoming more polarised, how the government can lead on regulation of the internet and artificial intelligence, and the relevance of culture and the arts in the digital space.
https://syntheticsociety.libsyn.com/regulating-the-internet
We Need a Legal and Ethical Framework for Lethal Autonomous Weapons
As part of a recent Defence Review, our Prime Minister has said that the UK will invest another £1.5 billion in military research and development designed to master the new technologies of warfare and establish a new Defence Centre for AI. The head of the British Army recently said that he foresees the army of the future as an integration of “boots and bots”.
The Government, however, have not yet explained how legal and ethical frameworks, and support for personnel engaged in operations, will also change as a consequence of the use of new technologies, particularly autonomous weapons, which could be deployed by our armed forces or our allies.
The final report of the US National Security Commission on Artificial Intelligence, published this March, did however consider the use of autonomous weapons systems and the risks associated with AI-enabled warfare, concluding that “The U.S. commitment to IHL” - international humanitarian law - “is long-standing, and AI-enabled and autonomous weapon systems will not change this commitment.”
The UN Secretary-General, António Guterres, goes further and argues: “Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law”. Yet we still have no international limitation agreement.
In company with a former Secretary of State for Defence and a former Chief of Defence Staff I recently argued in Parliament for a review of how legal and ethical frameworks need to be updated in response to novel defence technologies. This is my speech in which I pointed out the slow progress being made by the UK Government in addressing these issues.
In a written response subsequent to the debate, the Minister stated that whilst there is a NATO definition of “automated system” and “autonomous system”, the UK Ministry of Defence has no operative definition of Lethal Autonomous Weapon Systems or “LAWS”. Given that the most problematic aspect, autonomy, has been defined, that is an extraordinary state of affairs.
A few years ago, I chaired the House of Lords Select Committee on AI, which considered the economic, ethical and social implications of advances in artificial intelligence. In our report published in April 2018, entitled ‘AI in the UK: Ready, willing and able’, we addressed the issue of military use of AI, stating that ‘perhaps the most emotive and high stakes area of AI development today is its use for military purposes’ and recommending that this area merited a ‘full inquiry on its own’ (para 334).
As the Noble Lord Browne of Ladyton has made plain, regrettably, it seems not to have yet attracted such an inquiry or even any serious examination. I am therefore extremely grateful to the Noble Lord for creating the opportunity to follow up on some of the issues we raised in connection with the deployment of AI and some of the challenges we outlined.
It’s also a privilege to be a co-signatory with the Noble and Gallant Lord Houghton, who has thought so carefully about issues involving the human interface with military technology.
The broad context, of course, as the Noble Lord Browne has said, is the set of unknowns and uncertainties, in policy, legal and regulatory terms, that new technology in military use can generate.
His concerns about the complications, and the personal liabilities to which new technology exposes deployed forces, are widely shared by those who understand its capabilities; all the more so in a multinational context where other countries may be using technology which either we would not deploy or whose use could create potential vulnerabilities for our troops.
Looking back to our Report, one of the things that concerned the Committee more than anything else was the grey area surrounding the definition of lethal autonomous weapon systems or LAWS.
As the Noble Lord Browne has said, as the Committee explored the issue we discovered that the UK’s then definition, which included the phrase “An autonomous system is capable of understanding higher-level intent and direction”, was clearly out of step with the definitions used by most other governments and imposed a much higher threshold on what might be considered autonomous.
This allowed the Government to say: “the UK does not possess fully autonomous weapon systems and has no intention of developing them. Such systems are not yet in existence and are not likely to be for many years, if at all”.
Our committee concluded that, “in practice, this lack of semantic clarity could lead the UK towards an ill-considered drift into increasingly autonomous weaponry”.
This was particularly so in the light of the fact that, at the UN Convention on Certain Conventional Weapons Group of Governmental Experts (GGE) in 2017, the UK had opposed the proposed international ban on the development and use of autonomous weapons.
We therefore recommended that the UK’s definition of autonomous weapons should be realigned to be the same as, or similar to, that used by the rest of the world.
In their response to the Committee’s report in June 2018, however, the Government replied that the Ministry of Defence “has no plans to change the definition of an autonomous system”.
It did say however: “The UK will continue to actively participate in future GGE meetings, trying to reach agreement at the earliest possible stage.”
Later, thanks to the Liaison Committee we were able - on two occasions last year - to follow up on progress in this area.
On the first occasion, in reply to the Liaison Committee’s letter of last January, which asked: “What discussions have the Government had with international partners about the definition of an autonomous weapons system, and what representations have they received about the issues presented with their current definition?”, the Government replied:
“There is no international agreement on the definition or characteristics of autonomous weapons systems. HMG has received some representations on this subject from Parliamentarians…” and has discussed it during meetings of the UN Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), an international forum which brings together expertise from states, industry, academia and civil society.
“The GGE is yet to achieve consensus on an internationally accepted definition and there is therefore no common standard against which to align. As such, the UK does not intend to change its definition.”
So no change there, my Lords, until later in the year: in December 2020 the Prime Minister announced the creation of the Autonomy Development Centre to “accelerate the research, development, testing, integration and deployment of world-leading artificial intelligence and autonomous systems”.
In the follow-up report ‘AI in the UK: No Room for Complacency’, published in the same month, we concluded: “We believe that the work of the Autonomy Development Centre will be inhibited by the failure to align the UK’s definition of autonomous weapons with international partners: doing so must be a first priority for the Centre once established.”
The response to this last month was a complete about-turn by the Government. They said:
“We agree that the UK must be able to participate in international debates on autonomous weapons, taking an active role as moral and ethical leader on the global stage, and we further agree the importance of ensuring that official definitions do not undermine our arguments or diverge from our allies.
“In recent years the MOD has subscribed to a number of definitions of autonomous systems, principally to distinguish them from unmanned or automated systems, and not specifically as the foundation for an ethical framework. On this aspect, we are aligned with our key allies.
“Most recently, the UK accepted NATO’s latest definitions of “autonomous” and “autonomy”, which are now in working use within the Alliance. The Committee should note that these definitions refer to broad categories of autonomous systems, and not specifically to LAWS. To assist the Committee, we have provided a table setting out UK and some international definitions of key terms.”
The NATO definition sets a much lower bar for what is considered autonomous: “A system that decides and acts to accomplish desired goals within defined parameters, based on acquired knowledge and an evolving situational awareness, following an optimal but potentially unpredictable course of action.”
The Government went on to say: “The MOD is preparing to publish a new Defence AI Strategy and will continue to review definitions as part of ongoing policy development in this area.”
Now, I apologise for taking noble Lords at length through this exchange of recommendation and response, but if nothing else it demonstrates the terrier-like quality of Lords Select Committees in getting responses from government.
This latest response is extremely welcome. But in the context of the Noble Lord Browne’s amendment and the issues we have raised, we need to ask a number of questions now: what are the consequences of the MOD’s fresh thinking?
What is the Defence AI Strategy designed to achieve? Does it include the kind of inquiry our Select Committee was asking for?
Now that we subscribe to the common NATO definition of LAWS will the Strategy in fact deal specifically with the liability and international and domestic legal and ethical framework issues which are central to this amendment?
If not, my Lords, then a review of the type envisaged by this amendment is essential.
The final report of the US National Security Commission on Artificial Intelligence referred to by the Noble Lord Browne has for example taken a comprehensive approach to the issues involved. The Noble Lord has quoted three very important conclusions and asked whether the government agrees in respect of our own autonomous weapons.
There are three further crucial recommendations made by the Commission:
“The United States must work closely with its allies to develop standards of practice regarding how states should responsibly develop, test, and employ AI-enabled and autonomous weapon systems.”
And “The United States should actively pursue the development of technologies and strategies that could enable effective and secure verification of future arms control agreements involving uses of AI technologies.”
And of particular importance in this context: “countries must take actions which focus on reducing risks associated with AI-enabled and autonomous weapon systems and encourage safety and compliance with IHL (International Humanitarian Law) when discussing their development, deployment, and use”.
Will the Defence AI Strategy or indeed the Integrated Review undertake as wide an inquiry? Would it come to the same or similar conclusions?
My Lords, the MOD, it seems, has moved some way towards getting to grips with the implications of autonomous weapons in the last three years. If it has not yet considered the issues set out in the amendment, it clearly should, and it must, as soon as possible, update the legal frameworks for warfare in the light of new technology, or our service personnel will be at considerable legal risk. I hope it will move further in response to today’s short debate.
What will the Year of the Ox bring in UK-China relations?
Lord C-J: 2020 brought huge problems to the fore, particularly the COVID-19 virus, which is by no means over; we hope the WHO mission to Wuhan sheds light on how to prevent further viruses from emerging and spreading. Meanwhile, repression in Xinjiang and the National Security Law in Hong Kong have damaged confidence in our bilateral relations.
Saying No to Internal Vaccine Passports
https://bigbrotherwatch.org.uk/2021/04/70-mps-launch-cross-party-campaign-against-covid-passes/
70+ MPs LAUNCH CROSS-PARTY CAMPAIGN AGAINST COVID PASSES
BIG BROTHER WATCH TEAM / APRIL 2, 2021
Further to Sir Keir’s comments about vaccine passports being un-British and against the “British instinct”, over 70 MPs have launched a cross-party campaign opposing their “divisive and discriminatory use”.
MPs and peers from the Labour, Liberal Democrat and Conservative parties have signed a pledge to oppose the move.
Signatories include former Labour leader, Jeremy Corbyn, Labour MPs Dawn Butler and Rebecca Long Bailey, former director of Liberty, Baroness Shami Chakrabarti, and over 40 MPs from the Conservative Covid Recovery Group.
They have been supported by campaign groups Big Brother Watch, Liberty, the Joint Council for the Welfare of Immigrants (JCWI) and Privacy International.
The pledge states:
“We oppose the divisive and discriminatory use of COVID status certification to deny individuals access to general services, businesses or jobs.”
QUOTES
Baroness Chakrabarti said:
“International travel is a luxury but participating in your own community is a fundamental right. So internal Covid passports are an authoritarian step too far. We don’t defeat the virus with discrimination and oppression but with education, vaccination and mutual support.”
Leader of the Liberal Democrats, Ed Davey MP said:
“As we start to get this virus properly under control we should start getting our freedoms back, vaccine passports – essentially Covid ID cards – take us in the other direction.
“Liberal Democrats have always been the party for civil liberties, we were against ID cards when Blair tried to introduce them and we are against them now.
“I’m pleased Big Brother Watch is helping drive forward a growing consensus against Covid ID cards in our politics. Now I hope we can start to turn the tide on the creeping authoritarianism we are seeing from Number 10 across a broad range of issues.”
Sir Graham Brady MP said:
“Covid-Status Certification would be divisive and discriminatory. With high levels of vaccination protecting the vulnerable and making transmission less likely, we should aim to return to normal life, not to put permanent restrictions in place.”
Silkie Carlo, director of Big Brother Watch said:
“Our common goal is to emerge from lockdown – healthy, safe and free. But we won’t arrive at freedom through exclusion. Covid passes would be the first attempt at segregation in Britain for many decades, dividing communities without reducing the risks. We are in real danger of becoming a check-point society where anyone from bouncers to bosses could demand to see our papers. We cannot let this Government create a two-tier nation of division, discrimination and injustice.”
Minnie Rahman, Campaigns Director at JCWI, said:
“The Hostile Environment, which is built on identity checks, has already been proven to cause discrimination against migrants and people of colour. On top of this, migrant communities face significant barriers to accessing the vaccine. Any recovery plan which risks increasing racial discrimination and purposefully leaves people behind is doomed to fail.”
Sam Grant, Head of Policy and Campaigns at Liberty, said:
“We all want to get out of the pandemic as quickly as possible, but we need to do so in a way that ensures we don’t enter a ‘new normal’ which diminishes the rights and liberties we took for granted before the COVID crisis.
“Any passport system has the potential to create a two-tier society, and risk further marginalising people who are already discriminated against and cut off from vital services. Vaccine passports would allow ID systems by stealth, entrenching inequality and division.
“We need strategies that support and enable people to follow public health guidance, including vaccination, alongside more support for the most marginalised who have suffered the sharp edge of the Government’s focus on criminal justice over public health. We won’t get out of this pandemic by entrenching inequality, but only by protecting everyone.”
ENDS
Full list of signatories:
Labour Party
Diane Abbott MP
Bell Ribeiro-Addy MP
Tahir Ali MP
Rebecca Long Bailey MP
Clive Lewis MP
Beth Winter MP
Rachel Hopkins MP
Apsana Begum MP
Richard Burgon MP
Ian Byrne MP
Dawn Butler MP
Jeremy Corbyn MP
Mary Kelly Foy MP
Ian Lavery MP
Ian Mearns MP
John McDonnell MP
Grahame Morris MP
Kate Osborne MP
Zarah Sultana MP
Claudia Webbe MP
Mick Whitley MP
Nadia Whittome MP
Baroness Chakrabarti
Baroness Bryan of Partick
Lord Woodley
Lord Sikka
Lord Hendy
Liberal Democrats
Ed Davey MP
Layla Moran MP
Munira Wilson MP
Alistair Carmichael MP
Daisy Cooper MP
Wendy Chamberlain MP
Sarah Olney MP
Christine Jardine MP
Jamie Stone MP
Tim Farron MP
Lord Scriven
Lord Strasburger
Lord Tyler
Lord Clement-Jones
Conservative Party
Mark Harper MP
Steve Baker MP
Sir Iain Duncan Smith MP
Harriett Baldwin MP
Esther McVey MP
Adam Afriyie MP
Bob Blackman MP
Sir Graham Brady MP
Nus Ghani MP
Andrew Mitchell MP
Peter Bone MP
Ben Bradley MP
Andrew Bridgen MP
Paul Bristow MP
Philip Davies MP
Richard Drax MP
Jonathan Djanogly MP
Chris Green MP
Philip Hollobone MP
Adam Holloway MP
David Jones MP
Simon Jupp MP
Andrew Lewer MBE MP
Julian Lewis MP
Karl McCartney MP
Craig Mackinlay MP
Anthony Mangnall MP
Stephen McPartland MP
Anne Marie Morris MP
Sir John Redwood MP
Andrew Rosindell MP
Greg Smith MP
Henry Smith MP
Julian Sturdy MP
Sir Desmond Swayne MP
Sir Robert Syms MP
Craig Tracey MP
Jamie Wallis MP
David Warburton MP
William Wragg MP
Sir Charles Walker MP
NGOs/Businesses
Big Brother Watch
Liberty
Migrants Organise
Joint Council for the Welfare of Immigrants
medConfidential
Privacy International
Lord C-J Questions Touring Negotiation Failure
I questioned the Government on its total failure to negotiate a deal with the EU. It is clear the Home Office refused to grant EU citizens 90-day Permitted Paid Engagement. Touring musicians and creative artists have just been sacrificed on the altar of Tory immigration policy.
My Lords, touring musicians and creative artists are deeply angry at this negotiating failure. Is not the root of the problem refusal by the Home Office to extend permitted paid engagement here to 90 days for EU artists, meaning as a result that work permits will now be required in many member states for our artists? Will the Government urgently rethink this and renegotiate on the instrument and equipment carnet and on trucking issues?
Online Harms: The Need for Early Legislation
This is what I said when (finally) the Government made its response to the White Paper consultation in January.
My Lords, over three years have elapsed and three Secretaries of State have come and gone since the Green Paper, in the face of a rising tide of online harms, not least during the Covid period, as Ofcom has charted. On these Benches, therefore, we welcome the set of concrete proposals we finally have to tackle online harms through a duty of care. We welcome the proposal for pre-legislative scrutiny, but I hope that there is a clear and early timetable for this to take place.
As regards the ambit of the duty of care, children are of course the first priority in prevention of harm, but it is clear that social media companies have failed to tackle the spread of fake news and misinformation on their platforms. I hope that the eventual definition in the secondary legislation includes a wide range of harmful content such as deep fakes, Holocaust denial and anti-Semitism, and misinformation such as anti-vax and QAnon conspiracy theories.
I am heartened too by the Government’s plans to consider criminalising the encouragement of self-harm. I welcome the commitment to keeping a balance with freedom of expression, but surely the below-the-line exemption proposed should depend on the news publisher being Leveson-compliant in how it is regulated. I think I welcome the way that the major impact of the duty of care will fall on big-tech platforms with the greatest reach, but we on these Benches will want to kick the tyres hard on the definition, threshold and duties of category 2 to make sure that this does not become a licence to propagate serious misinformation by some smaller platforms and networks.
I welcome the confirmation that Ofcom will be the regulator, but the key to success in preventing online harms will be whether Ofcom has teeth. Platforms will need to demonstrate how they have reduced the “reasonably foreseeable” risk of harm occurring from the design of their services. In mitigating the risk of “legal but harmful content”, this comes down to the way in which platforms facilitate and even encourage the sharing of extreme or sensationalist content designed to cause harm. As many excellent bodies such as Reset, Avaaz and Carnegie UK have pointed out—as the noble Lord, Lord Stevenson, said, the latter is the begetter of the duty of care proposal—this means having the power of compulsory audit. Inspection of the algorithms that drive traffic on social media is crucial.
Will Ofcom be able to make a direction to amend a recommender algorithm, how a “like” function operates and how content is promoted? Will it be able to inspect the data by which the algorithm trains and operates? Will Ofcom be able to insist that platforms can establish the identity of a user and address the issue of fake accounts, or that paid content is labelled? Will it be able to require platforms to issue fact-checked corrections to scientifically inaccurate posts? Will Ofcom work hand in hand with the Internet Watch Foundation? International co-ordination will be vital.
Ofcom will also need to work closely with the CMA if the Government are to protect vulnerable victims of online scams, fraud, and fake and misleading online reviews, if they are explicitly excluded from this legislation. Ofcom will need to work with the ASA to regulate harmful online advertising, as well. It will also need to work with the Gambling Commission on the harms of online black-market gambling, as was highlighted yesterday by my noble friend Lord Foster.
How will this new duty of care mesh with compliance with the age-appropriate design code, regulated by the ICO? As the noble Lord, Lord Stevenson, has mentioned, the one major fudge in the response is on age verification. The proposals do not meet the objectives of the original Part 3 of the Digital Economy Act. We were promised action when the response arrived, but we have a much watered-down proposal. Pornography is increasingly available and accessible to young people on more sites than just those with user-generated content. How do the Government propose to tackle this ever more pressing problem? There are many other areas that we will want to examine in the pre-legislative process and when the Bill comes to this House.
As my honourable friend Jamie Stone pointed out in the Commons yesterday, a crucial component of minimising risk online is education. Schools need to educate children about how to use social media responsibly. What commitment do the Government have to online media education? When will the strategy appear and what resources will be devoted to it?
These are some of the yet unanswered questions before the draft legislation arrives, but I hope that the Government commit to a full debate early in the new year so that some of these issues can be unpacked at the same time as the pre-legislative scrutiny process starts.
Lord C-J : Give Musicians the Freedom to Tour
At a recent debate, colleagues and I heavily criticised the Government’s failure to secure a cultural exemption from cabotage rules in the EU trade negotiations.
My Lords, I join with other noble Lords in pointing out that the issues on cabotage are part of a huge cloud now hanging over the creative sector, including the requirement for work permits or visa exemptions in many EU countries, CITES certificates for musical instruments, ATA carnets for all instruments and equipment, and proof of origin requirements for merchandise. Cabotage provisions in the EU-UK Trade and Co-operation Agreement will mean that performers’ European tours will no longer be viable, because the agreement specifies that hauliers will be able to make only two journeys within a trip to the EU. Having to return to the UK between unloading sites in the EU will have a significant negative impact on the UK’s cultural exports and associated jobs.
A successful UK transport industry dedicated to our creative industries is at risk of relocation to the EU, endangering British jobs and jeopardising the attractiveness of the UK as a culture hub, as support industries will follow the companies that relocate to the EU. What proposals do the Government have for a negotiated solution, such as those we have heard about today, that will meet the sector’s needs?
Prime Minister Sacrificing Our Creative Industries on the Altar of Sovereignty
Lord C-J on the Brexit betrayal of our creative industries
COVID-19, Artificial Intelligence and Data Governance: A Conversation with Lord Tim Clement-Jones
BIICL June 2020
https://youtu.be/sABSaAkkyrI
This was the first in a series of webinars on 'Artificial Intelligence: Opportunities, Risks, and the Future of Regulation'.
In light of the COVID-19 outbreak, governments are developing tracing applications and using a multitude of data to mitigate the spread of the virus. But the processing, storage and use of personal data, and the public health effectiveness of these applications, require public trust and a clear and specific regulatory context.
The technical focus in the debate on the design of the applications - centralised v. decentralised, national v. global, and so on - obfuscates ethical, social, and legal scrutiny, in particular against the emerging context of public-private partnerships. Discussants focused on these issues, considering the application of AI and data governance issues against the context of a pandemic, national responses, and the need for international, cross border collaboration.
Lord Clement-Jones CBE led a conversation with leading figures in this field, including:
Professor Lilian Edwards, Newcastle Law School, the inspiration behind the draft Coronavirus (Safeguards) Bill 2020: Proposed protections for digital interventions and in relation to immunity certificates;
Carly Kind, Director of The Ada Lovelace Institute, which published the rapid evidence review paper Exit through the App Store? Should the UK Government use technology to transition from the COVID-19 global public health crisis;
Professor Peter Fussey, Research Director of Advancing human rights in the age of AI and the digital society at Essex University's Human Rights Centre;
Mark Findlay, Director of the Centre for Artificial Intelligence and Data Governance at Singapore Management University, which has recently published a position paper on Ethics, AI, Mass Data and Pandemic Challenges: Responsible Data Use and Infrastructure Application for Surveillance and Pre-emptive Tracing Post-crisis.
The event was convened by Dr Irene Pietropaoli, Research Fellow in Business & Human Rights, British Institute of International and Comparative Law.
Regulating artificial intelligence: Where are we now? Where are we heading?
By Annabel Ashby, Imran Syed & Tim Clement-Jones on March 3, 2021
https://www.technologyslegaledge.com/author/tclementjones/
Hard or soft law?
That the regulation of artificial intelligence is a hot topic is hardly surprising. AI is being adopted at speed, news reports frequently appear about high-profile AI decision-making, and the sheer volume of guidance and regulatory proposals for interested parties to digest can seem challenging.
Where are we now? What can we expect in terms of future regulation? And what might compliance with “ethical” AI entail?
High-level ethical AI principles were adopted by the OECD, EU and G20 in 2019. As explained below, great strides were made in 2020 as key bodies worked to capture these principles in proposed new regulation and operational processes. 2021 will undoubtedly keep up this momentum as these initiatives continue their journey into further guidance and some hard law.
In the meantime, with regulation playing catch-up with reality (so often the case where technological innovation is concerned), industry has sought to provide reassurance by developing voluntary codes. While this is helpful and laudable, regulators are taking the view that more consistent, risk-based regulation is preferable to voluntary best practice.
We outline the most significant initiatives below, but first it is worth understanding what regulation might look like for an organisation using AI.
Regulating AI
Of course the devil will be in the detail, but analysis of the most influential papers from around the globe reveals common themes that are the likely precursors of regulation. Conceptually, the regulation of AI is fairly straightforward and has three key components:
- setting out the standards to be attained;
- creating record-keeping obligations; and
- possible certification following audit of those records, all framed by a risk-based approach.
Standards
Quality starts with the governance process around an organisation’s decision to use AI in the first place (does it, perhaps, involve an ethics committee? If so, what does the committee consider?) before considering the quality of the AI itself and how it is deployed and operated by an organisation.
Key areas that will drive standards in AI include the quality of the training data used to teach the algorithm (flawed data can “bake in” inequality or discrimination), the degree of human oversight, and the accuracy, security and technical robustness of the IT. There is also usually an expectation that certain information be given to those affected by the decision-making, such as consumers or job applicants. This includes explainability of those decisions and an ability to challenge them – a process made more complex when decisions are made in the so-called “black box” of a neural network.

An argument against specific AI regulation is that some of these quality standards are already enshrined in hard law, most obviously in equality laws and, where relevant, data protection. However, the more recent emphasis on ethical standards means that some aspects of AI that have historically been considered soft nice-to-haves may well develop into harder must-haves for organisations using AI. For example, the Framework for Ethical AI adopted by the European Parliament last Autumn includes mandatory social responsibility and environmental sustainability obligations.
Records
To demonstrate that processes and standards have been met, record-keeping will be essential. At least some of these records will be open to third-party audit as well as being used for an organisation’s own due diligence. Organisations need a certain maturity in their AI governance and operational processes to achieve this, although for many it will be a question of identifying gaps and/or enhancing existing processes rather than starting from scratch. Audit could include information about or access to training data sets; evidence that certain decisions were made at board level; staff training logs; operational records, and so on. Records will also form the foundation of the all-important accountability aspects of AI.

That said, AI brings particular challenges to record-keeping and audit, including an argument for going beyond singular audits and static record-keeping into a more continuous mode of monitoring, given that the decisions of many AI solutions will change over time as they seek to improve accuracy. This is of course an appeal of moving to AI, but it creates potentially greater opportunity for bias or errors to be introduced and to scale quickly.
Certification
A satisfactory audit could inform AI certification, helping to drive quality and build the customer and public confidence in AI decision-making that is necessary for successful use of AI. Again, although the evolving nature of AI which “learns” complicates matters, certification will need to be measured against standards and monitoring capabilities that speak to these aspects of AI risk.
Risk-based approach
Recognising that AI’s uses range from the relatively insignificant to critical and/or socially sensitive decision-making, best practice and regulatory proposals invariably take a flexible approach and focus requirements on “high-risk” use of AI. This concept is key; proportionate, workable, regulation must take into account the context in which the AI is to be deployed and its potential impact rather than merely focusing on the technology itself.
Key initiatives and Proposals
Turning to some of the more significant developments in AI regulation, there are some specifics worth focusing on:
OECD
The OECD outlined its classification of AI systems in November with a view to giving policy-makers a simple lens through which to view the deployment of any particular AI system. Its classification uses four dimensions: context (i.e. sector, stakeholder, purpose etc); data and input; AI model (i.e. neural or linear? Supervised or unsupervised?); and tasks and output (i.e. what does the AI do?). Read more here.
Europe
Several significant proposals were published by key institutions in 2020.
In the Spring, the European Commission’s White Paper on AI proposed regulation of AI by a principles-based legal framework targeting high-risk AI systems. It believes that regulation can underpin an AI “Eco-system of Excellence” with resulting public buy-in thanks to an “Eco-system of Trust.” For more detail see our 2020 client alert. Industry response to this proposal was somewhat lukewarm, but the Commission seems keen to progress with regulation nevertheless.
In the Autumn the European Parliament adopted its Framework for Ethical AI, to be applicable to “AI, robotics and related technologies developed, deployed and/or used within the EU” (regardless of the location of the software, algorithm or data itself). Like the Commission’s White Paper, this proposal also targets high-risk AI (although what high-risk means in practice is not aligned between the two proposals). As well as the social and environmental aspects we touched upon earlier, notable in this proposed Ethical Framework is the emphasis on the human oversight required to achieve certification. Concurrently, the European Parliament looked at IP ownership for AI-generated creations and published its proposed Regulation on liability for the operation of AI systems, which recommends, among other things, an update of the current product liability regime.
Looking through the lens of human rights, the Council of Europe considered the feasibility of a legal framework for AI and how that might best be achieved. Published in December, its report identified gaps to be plugged in the existing legal protection (a conclusion which had also been reached by the European Parliamentary Research Services, which found that existing laws, though helpful, fell short of the standards required for its proposed AI Ethics framework). Work is now ongoing to draft binding and non-binding instruments to take this study forward.
United Kingdom
The AI Council’s AI Roadmap sets out recommendations to the UK government for the strategic direction of AI. That January 2021 report covers a range of areas, from promoting UK talent to trust and governance. For more detail read the executive summary.
Only a month before, in December 2020, the House of Lords had published AI in the UK: No room for complacency, a report with a strong emphasis on the need for public trust in AI and the associated issue of ethical frameworks. Noting that industry is currently self-regulating, the report recommended sector regulation that would extend to practical advice as well as principles and training. This seems to be a sound conclusion given that the Council of Europe’s work included the review of over 100 ethical AI documents which, it found, started from common principles but interpreted these very differently when it came to operational practice.
The government’s response to that report has just been published. It recognises the need for public trust in AI “including embedding ethical principles against a consensus normative framework.” The report promotes a number of initiatives, including the work of the AI Council and Ada Lovelace Institute, who have together been developing a legal framework for data governance upon which they are about to report.
The influential Centre for Data Ethics and Innovation published its AI Barometer and its Review into Bias in Algorithmic Decision-Making. Both reports make interesting reading, with the barometer report looking at risk and regulation across a number of sectors. In the context of regulation, it is notable that the CDEI does not recommend a specialist AI regulator for the UK but seems to favour a sectoral approach if and when regulation is required.
Regulators
Regulators are interested in lawful use, of course, but are also concerned with the bigger picture. Might AI decision-making disadvantage certain consumers? Could AI inadvertently create sector vulnerability thanks to over-reliance by the major players on any particular algorithm and/or data pool? (The competition authorities will be interested in this aspect too.) The UK’s Competition and Markets Authority published research into potential AI harms in January and is calling for evidence as to the most effective way to regulate AI. Visit the CMA website here.
The Financial Conduct Authority will be publishing a report into AI transparency in financial services imminently. Unsurprisingly, the UK’s data protection regulator has published guidance to help organisations audit AI in the context of data protection compliance, and the public sector benefits from detailed guidance from the Turing Institute.
Regulators themselves are now becoming more of a focus. The December House of Lords report also recommended regulator training in AI ethics and risk assessment. As part of its February response, the government stated that the Competition and Markets Authority, Information Commissioner’s Office and Ofcom have together formed a Digital Regulation Cooperation Forum (DRCF) to cooperate on issues of mutual importance, and that a wider forum of regulators and other organisations will consider training needs.
2021 and beyond
In Europe we can expect regulatory developments to continue at pace in 2021, despite concerns from Denmark and others that AI may become over-regulated. As we increasingly develop the tools for classification and risk assessment, the question is therefore less about whether to regulate and more about which applications, contexts and sectors are candidates for early regulation.