As part of a recent Defence Review, our Prime Minister has said that the UK will invest another £1.5 billion in military research and development designed to master the new technologies of warfare, and will establish a new Defence Centre for AI. The head of the British Army recently said that he foresees the army of the future as an integration of “boots and bots”.
The Government, however, have not yet explained how legal and ethical frameworks, and support for personnel engaged in operations, will also change as a consequence of the use of new technologies, particularly autonomous weapons, which could be deployed by our armed forces or our allies.
The final report of the US National Security Commission on Artificial Intelligence, published this March, considered the use of autonomous weapons systems and the risks associated with AI-enabled warfare, and concluded that “The U.S. commitment to IHL” – international humanitarian law – “is long-standing, and AI-enabled and autonomous weapon systems will not change this commitment.”
The UN Secretary-General, António Guterres, goes further and argues: “Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law”. Yet we still have no international limitation agreement.
In company with a former Secretary of State for Defence and a former Chief of the Defence Staff, I recently argued in Parliament for a review of how legal and ethical frameworks need to be updated in response to novel defence technologies. This is my speech, in which I pointed out the slow progress being made by the UK Government in addressing these issues.
In a written response subsequent to the debate, the Minister stated that, whilst there is a NATO definition of “automated system” and “autonomous system”, the UK Ministry of Defence has no operative definition of Lethal Autonomous Weapon Systems or “LAWS”. Given that the most problematic aspect – autonomy – HAS been defined, that is an extraordinary state of affairs.
A few years ago, I chaired the House of Lords Select Committee on AI, which considered the economic, ethical and social implications of advances in artificial intelligence. In our Report published in April 2018, entitled ‘AI in the UK: Ready, willing and able’, we addressed the issue of military use of AI and stated that ‘perhaps the most emotive and high stakes area of AI development today is its use for military purposes’, recommending that this area merited a ‘full inquiry on its own’ (para 334).
As the Noble Lord Browne of Ladyton has made plain, regrettably, it seems not to have yet attracted such an inquiry or even any serious examination. I am therefore extremely grateful to the Noble Lord for creating the opportunity to follow up on some of the issues we raised in connection with the deployment of AI and some of the challenges we outlined.
It’s also a privilege to be a co-signatory with the Noble and Gallant Lord Houghton, who has thought so carefully about issues involving the human interface with military technology.
The broad context, of course, as the Noble Lord Browne has said, is the set of unknowns and uncertainties, in policy, legal and regulatory terms, that new technology in military use can generate.
His concerns about the complications, and the personal liabilities to which new technology exposes deployed forces, are widely shared by those who understand its capabilities. This is all the more so in a multinational context, where other countries may be using technology which either we would not deploy or the use of which could create potential vulnerabilities for our troops.
Looking back to our Report, one of the things that concerned the Committee more than anything else was the grey area surrounding the definition of lethal autonomous weapon systems or LAWS.
As the Noble Lord Browne has said, as the Committee explored the issue we discovered that the UK’s then definition – which included the phrase “An autonomous system is capable of understanding higher-level intent and direction” – was clearly out of step with the definitions used by most other governments and imposed a much higher threshold on what might be considered autonomous.
This allowed the Government to say: “the UK does not possess fully autonomous weapon systems and has no intention of developing them. Such systems are not yet in existence and are not likely to be for many years, if at all.”
Our Committee concluded that “In practice, this lack of semantic clarity could lead the UK towards an ill-considered drift into increasingly autonomous weaponry”.
This was particularly so in light of the fact that, at the UN Convention on Certain Conventional Weapons Group of Governmental Experts (GGE) in 2017, the UK had opposed the proposed international ban on the development and use of autonomous weapons.
We therefore recommended that the UK’s definition of autonomous weapons should be realigned to be the same as, or similar to, that used by the rest of the world.
In their response to the Committee’s Report in June 2018, however, the Government replied that the Ministry of Defence “has no plans to change the definition of an autonomous system”.
It did, however, say: “The UK will continue to actively participate in future GGE meetings, trying to reach agreement at the earliest possible stage.”
Later, thanks to the Liaison Committee, we were able – on two occasions last year – to follow up on progress in this area.
On the first occasion, the Liaison Committee’s letter of last January asked: “What discussions have the Government had with international partners about the definition of an autonomous weapons system, and what representations have they received about the issues presented with their current definition?”
The Government replied:
“There is no international agreement on the definition or characteristics of autonomous weapons systems. HMG has received some representations on this subject from Parliamentarians …” and has discussed it during meetings of the UN Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), an international forum which brings together expertise from states, industry, academia and civil society.
“The GGE is yet to achieve consensus on an internationally accepted definition and there is therefore no common standard against which to align. As such, the UK does not intend to change its definition.”
So no change there, my Lords, until later in the year: in December 2020 the Prime Minister announced the creation of the Autonomy Development Centre to “accelerate the research, development, testing, integration and deployment of world-leading artificial intelligence and autonomous systems”.
In the follow-up Report, “AI in the UK: No Room for Complacency”, published in the same month, we concluded: “We believe that the work of the Autonomy Development Centre will be inhibited by the failure to align the UK’s definition of autonomous weapons with international partners: doing so must be a first priority for the Centre once established.”
The response to this, last month, was a complete about-turn by the Government. They said:
“We agree that the UK must be able to participate in international debates on autonomous weapons, taking an active role as moral and ethical leader on the global stage, and we further agree the importance of ensuring that official definitions do not undermine our arguments or diverge from our allies.
“In recent years the MOD has subscribed to a number of definitions of autonomous systems, principally to distinguish them from unmanned or automated systems, and not specifically as the foundation for an ethical framework. On this aspect, we are aligned with our key allies.
“Most recently, the UK accepted NATO’s latest definitions of “autonomous” and “autonomy”, which are now in working use within the Alliance. The Committee should note that these definitions refer to broad categories of autonomous systems, and not specifically to LAWS. To assist the Committee, we have provided a table setting out UK and some international definitions of key terms.”
The NATO definition sets a much lower bar for what is considered autonomous: “A system that decides and acts to accomplish desired goals within defined parameters, based on acquired knowledge and an evolving situational awareness, following an optimal but potentially unpredictable course of action.”
The Government went on to say: “The MOD is preparing to publish a new Defence AI Strategy and will continue to review definitions as part of ongoing policy development in this area.”
Now, I apologise for taking Noble Lords at length through this exchange of recommendation and response, but if nothing else it does demonstrate the terrier-like quality of Lords Select Committees in getting responses from government.
This latest response is extremely welcome. But in the context of the Noble Lord Browne’s amendment and the issues we have raised, we need to ask a number of questions: What are the consequences of the MOD’s fresh thinking?
What is the Defence AI Strategy designed to achieve? Does it include the kind of inquiry our Select Committee was asking for?
Now that we subscribe to the common NATO definition of LAWS will the Strategy in fact deal specifically with the liability and international and domestic legal and ethical framework issues which are central to this amendment?
If not, my Lords, then a review of the type envisaged by this amendment is essential.
The final report of the US National Security Commission on Artificial Intelligence, referred to by the Noble Lord Browne, has, for example, taken a comprehensive approach to the issues involved. The Noble Lord has quoted three very important conclusions and asked whether the Government agrees in respect of our own autonomous weapons.
There are three further crucial recommendations made by the Commission:
“The United States must work closely with its allies to develop standards of practice regarding how states should responsibly develop, test, and employ AI-enabled and autonomous weapon systems.”
And “The United States should actively pursue the development of technologies and strategies that could enable effective and secure verification of future arms control agreements involving uses of AI technologies.”
And, of particular importance in this context: “countries must take actions which focus on reducing risks associated with AI-enabled and autonomous weapon systems and encourage safety and compliance with IHL (International Humanitarian Law) when discussing their development, deployment, and use”.
Will the Defence AI Strategy, or indeed the Integrated Review, undertake as wide an inquiry? Would it come to the same or similar conclusions?
My Lords, the MOD, it seems, has moved some way towards getting to grips with the implications of autonomous weapons in the last three years. If it has not yet considered the issues set out in this amendment, it clearly should; it must, as soon as possible, update the legal frameworks for warfare in the light of new technology, or our service personnel will be at considerable legal risk. I hope it will move further in response to today’s short debate.
6th April 2021