The House of Lords recently debated the development of advanced artificial intelligence, the associated risks and approaches to regulation. This is an edited version of what I said when winding up the debate.
The narrative around AI swirls back and forth in this age of generative AI, to an even greater degree than when our AI Select Committee conducted its inquiry in 2017-18—it is very good to see a number of members of that committee here today. For instance, in March more than 1,000 technologists called for a moratorium on AI development. This month, another 1,000 technologists said that AI is a force for good. More than ever, we need to separate the hype from the reality.
Our Prime Minister seems to oscillate between various narratives. One month we have an AI governance White Paper suggesting an almost entirely voluntary approach to regulation, and then shortly thereafter he talks about AI as an existential risk. He wants the UK to be a global hub for AI and a world leader in AI safety, with a summit later this year.
I will not dwell too much on the definition of AI. The fact is that the EU and OECD definitions are now widely accepted, as is the latter’s classification framework. We need to decide whether it is tool, partner or competitor. We heard today of the many opportunities AI presents to transform many aspects of people’s lives for the better, from healthcare to scientific research, education, trade, agriculture and meeting many of the sustainable development goals. There may be gains in productivity, or in the detection of crime.
However, AI also clearly presents major risks: reflecting and exacerbating social prejudice and bias, misusing personal data and undermining the right to privacy, as in the use of live facial recognition technology. We have the spreading of misinformation, the so-called hallucinations of large language models and the creation of deepfakes and hyper-realistic sexual abuse imagery, as the NSPCC has highlighted, all potentially exacerbated by the new open-source large language models that are coming. We have a Select Committee looking at the dilemmas posed by lethal autonomous weapons. We have major threats to national security. There is also the question of overdependence on artificial intelligence—a rather new but very clearly present risk for the future.
We must have an approach to AI that augments jobs as far as possible and equips people with the skills they need, whether to use new technology or to create it. We should go further on a massive skills and upskilling agenda and much greater diversity and inclusion in the AI workforce. We must enable innovators and entrepreneurs to experiment, while taking on concentrations of power. We must make sure that they do not stifle and limit choice for consumers and hamper progress. We need to tackle the issues of access to semiconductors, computing power and the datasets necessary to develop large language generative AI models.
However, the key and most pressing challenge is to build public trust, as we heard from so many noble Lords, and ensure that new technology is developed and deployed ethically, so that it respects people’s fundamental rights, including the rights to privacy and non-discrimination, and so that it enhances rather than substitutes for human creativity and endeavour. Explainability is key, as the noble Lord, Lord Holmes, said. I entirely agree with the right reverend Prelate that we need to make sure that we adopt these high-level ethical principles, but I do not believe that is enough. A long gestation period of national AI policy-making has ended up producing a minimal proposal for:
“A pro-innovation approach to AI regulation”,
which, in substance, will amount to toothless exhortation by sectoral regulators to follow ethical principles and a complete failure to regulate AI development where there is no regulator.
Much of the White Paper’s diagnosis of the risks and opportunities of AI is correct. It emphasises the need for public trust and sets out the attendant risks, but the actual governance prescription falls far short and does nothing to ensure that the benefits of AI are fairly distributed. There is no recognition that the different forms of AI are technologies that need a comprehensive cross-sectoral approach to ensure that they are transparent, explainable, accurate and free of bias, whether they are deployed in a regulated or an unregulated sector. Business needs clear central co-ordination and oversight, not a patchwork of regulation. Existing coverage by legal duties is very patchy: bias may be covered by the Equality Act and data issues by our data protection laws but, for example, there is no existing obligation for ethics by design—for transparency, explainability and accountability—and liability for the performance of AI systems is very unclear.
We need to be clear, above all, as organisations such as techUK are, that regulation is not necessarily the enemy of innovation. In fact, it can be the stimulus and the key to gaining and retaining public trust around AI and its adoption, so that we can realise the benefits and minimise the risks. What I believe is needed is a combination of risk-based, cross-sectoral regulation, combined with specific regulation in sectors such as financial services, underpinned by common, trustworthy standards of testing, risk and impact assessment, audit and monitoring. We need, as far as possible, to ensure international convergence, as we heard from the noble Lord, Lord Rees, and interoperability of these standards of AI systems, and to move towards common IP treatment of AI products.
We have world-beating AI researchers and developers. We need to support their international contribution, not fool them that they can operate in isolation. If they have any international ambitions, they will have to decide to conform to EU requirements under the forthcoming AI legislation and ensure that they avoid liability in the US by adopting the AI risk management standards being set by the National Institute of Standards and Technology. Can the Minister tell us what the next steps will be, following the White Paper? When will the global summit be held? What is the AI task force designed to do and how? Does he agree that international convergence on standards is necessary and achievable? Does he agree that we need to regulate before the advent of artificial general intelligence?
As for the creative industries, there are clearly great opportunities in relation to the use of AI. Many sectors already use the technology in a variety of ways to enhance their creativity and make it easier for the public to discover new content.
But there are also big questions over authorship and intellectual property, and many artists feel threatened. Responsible AI developers seek to license content which will bring in valuable income. However, many of the large language model developers seem to believe that they do not need to seek permission to ingest content. What discussion has the Minister, or other Ministers, had with these large language model firms in relation to their responsibilities for copyright law? Can he also make a clear statement that the UK Government believe that the ingestion of content requires permission from rights holders, and that, should permission be granted, licences should be sought and paid for? Will he also be able to update us on the code of practice process in relation to text and data-mining licensing, following the Government’s decision to shelve changes to the exemption and the consultation that the Intellectual Property Office has been undertaking?
There are many other issues relating to performing rights and the copying of actors’, musicians’, artists’ and other creators’ images, voices, likenesses, styles and attributes. These are at the root of the Hollywood actors’ and screenwriters’ strike, as well as of campaigns here from the Writers’ Guild of Great Britain and from Equity. We need to ensure that creators and artists derive the full benefit of technology, such as AI-made performance synthesisation and streaming. I very much hope that the Minister can comment on that as well.
We have only scratched the surface in tackling the AI governance issues in this excellent debate, but I hope that the Minister’s reply can assure us that the Government are moving forward at pace on this and will ensure that a full debate on AI governance goes forward.