Debate on AI in the UK: No Room For Complacency report
Recently the House of Lords belatedly debated AI in the UK: No Room for Complacency, the follow-up to the original House of Lords AI Select Committee report. This is how I introduced it:
My Lords, the Liaison Committee report No Room for Complacency was published in December 2020, as a follow-up to our AI Select Committee report, AI in the UK: Ready, Willing and Able?, published in April 2018. Throughout both inquiries and right up until today, the pace of development here and abroad in AI technology, and the discussion of AI governance and regulation, has been extremely fast moving. Today, just as then, I know that I am attempting to hit a moving target. Just take, for instance, the announcement a couple of weeks ago about the new Gato—the multipurpose AI which can do 604 functions—or, perhaps less optimistically, the Clearview fine. Both have relevance to what we have to say today.
First, however, I say a big thank you to the then Liaison Committee for the new procedure which allowed our follow-up report and to the current Lord Speaker, Lord McFall, in particular and those members of our original committee who took part. I give special thanks to the Liaison Committee team of Philippa Tudor, Michael Collon, Lucy Molloy and Heather Fuller, and to Luke Hussey and Hannah Murdoch from our original committee team who more than helped bring the band, and our messages, back together.
So what were the main conclusions of our follow-up report? What was the government response, and where are we now? I shall tackle this under five main headings. The first is trust and understanding. The adoption of AI has made huge strides since we started our first report, but the trust issue still looms large. Nearly all our witnesses in the follow-up inquiry said that engagement continued to be essential across business and society in particular to ensure that there is greater understanding of how data is used in AI and that government must lead the way. We said that the development of data trusts must speed up. They were the brainchild of the Hall-Pesenti report back in 2017 as a mechanism for giving assurance about the use and sharing of personal data, but we now needed to focus on developing the legal and ethical frameworks. The Government acknowledged that the AI Council’s roadmap took the same view and pointed to the ODI work and the national data strategy. However, there has been too little recent progress on data trusts. The ODI has done some good work, together with the Ada Lovelace Institute, but this needs taking forward as a matter of urgency, particularly guidance on the legal structures. If anything, the proposals in Data: A New Direction, presaging a new data reform Bill in the autumn, which propose watering down data protection, are a backward step.
More needs to be done generally on digital understanding. The digital literacy strategy needs to be much broader than digital media, and a strong digital competition framework has yet to be put in place. Public trust has not been helped by confusion and poor communication about the use of data during the pandemic, and initiatives such as the Government’s single identifier project, together with automated decision-making and live facial recognition, are a real cause for concern that we are approaching an all-seeing state.
My second heading is ethics and regulation. One of the main areas of focus of our committee throughout has been the need to develop an appropriate ethical framework for the development and application of AI, and we were early advocates for international agreement on the principles to be adopted. Back in 2018, the committee took the view that blanket regulation would be inappropriate, and we recommended an approach to identify gaps in the regulatory framework where existing regulation might not be adequate. We also placed emphasis on the importance of regulators having the necessary expertise.
In our follow-up report, we took the view that it was now high time to move on to agreement on the mechanisms on how to instil what are now commonly accepted ethical principles—I pay tribute to the right reverend Prelate for coming up with the idea in the first place—and to establish national standards for AI development and AI use and application. We referred to the work that was being undertaken by the EU and the Council of Europe, with their risk-based approaches, and also made recommendations focused on development of expertise and better understanding of risk of AI systems by regulators. We highlighted an important advisory role for the Centre for Data Ethics and Innovation and urged that it be placed on a statutory footing.
We welcomed the formation of the Digital Regulation Cooperation Forum. It is clear that all the regulators involved—I apologise for the initials in advance: the ICO, the CMA, Ofcom and the FCA—have made great strides in building a centre of excellence in AI and algorithm audit and in making this public. However, despite the publication of the National AI Strategy and its commitment to trustworthy AI, we still await the Government’s proposals on AI governance in the forthcoming White Paper.
It seems that the debate within government about whether to have a horizontal or vertical sectoral framework for regulation still continues. However, it seems clear to me, particularly for accountability and transparency, that some horizontality across government, business and society is needed to embed the OECD principles. At the very least, we need to be mindful that the extraterritoriality of the EU AI Act means a level of regulatory conformity will be required and that there is a strong need for standards of impact, as well as risk assessment, audit and monitoring, to be enshrined in regulation to ensure, as techUK urges, that we consider the entire AI lifecycle.
We need to consider particularly what regulation is appropriate for those applications which are genuinely high risk and high impact. I hope that, through the recently created AI standards hub, the Alan Turing Institute will take this forward at pace. All this has been emphasised by the debate on the deployment of live facial recognition technology, the use of biometrics in policing and schools, and the use of AI in criminal justice, recently examined by our own Justice and Home Affairs Committee.
My third heading is government co-ordination and strategy. Throughout our reports we have stressed the need for co-ordination between a very wide range of bodies, including the Office for Artificial Intelligence, the AI Council, the CDEI and the Alan Turing Institute. On our follow-up inquiry, we still believed that more should be done to ensure that this was effective, so we recommended a Cabinet committee which would commission and approve a five-year national AI strategy, as did the AI road map.
In response, the Government did not agree to create a committee but they did commit to the publication of a cross-government national AI strategy. I pay tribute to the Office for AI, in particular its outgoing director Sana Khareghani, for its work on this. The objectives of the strategy are absolutely spot on, and I look forward to seeing the national AI strategy action plan, which it seems will show how cross-government engagement is fostered. However, the report on AI and public standards by the Committee on Standards in Public Life—I am delighted that the noble Lord, Lord Evans, will speak today—made the deficiencies in common standards in the public sector clear.
Subsequently, we now have an ethics, transparency and accountability framework for automated decision-making in the public sector, and more recently the CDDO-CDEI public sector algorithmic transparency standard, but there appears to be no central and local government compliance mechanism and little transparency in the form of a public register, and the Home Office appears to be still a law unto itself. We have AI procurement guidelines based on the World Economic Forum model but nothing relevant to them in the Procurement Bill, which is being debated as we speak. I believe we still need a government mechanism for co-ordination and compliance at the highest level.
The fourth heading is impact on jobs and skills. Opinions differ over the potential impact of AI but, whatever the chosen prognosis, we said there was little evidence that the Government had taken a really strategic view about this issue and the pressing need for digital upskilling and reskilling. Although the Government agreed that this was critical and cited a number of initiatives, I am not convinced that the pace, scale and ambition of government action really matches the challenge facing many people working in the UK.
The Skills and Post-16 Education Act, with its introduction of a lifelong loan entitlement, is a step in the right direction and I welcome the renewed emphasis on further education and the new institutes of technology. The Government refer to AI apprenticeships, but apprentice levy reform is long overdue. The work of local digital skills partnerships and digital boot camps is welcome, but they are greatly under-resourced and only a patchwork. The recent Youth Unemployment Select Committee report Skills for Every Young Person noted the severe lack of digital skills and the need to embed digital education in the curriculum, as did the AI road map. Alongside this, we shared the priority of the AI Council road map for more diversity and inclusion in the AI workforce and wanted to see more progress.
At the less rarefied end, although there are many useful initiatives on foot, not least from techUK and Global Tech Advocates, it is imperative that the Government move much more swiftly and strategically. The All-Party Parliamentary Group on Diversity and Inclusion in STEM recommended in a recent report a STEM diversity decade of action. As mentioned earlier, broader digital literacy is crucial too. We need to learn how to live and work alongside AI.
The fifth heading is the UK as a world leader. It was clear to us that the UK needs to remain attractive to international research talent, and we welcomed the Global Partnership on AI initiative. The Government in response cited the new fast-track visa, but there are still strong concerns about the availability of research visas for entrance to university research programmes. The failure to agree and lack of access to EU Horizon research funding could have a huge impact on our ability to punch our weight internationally.
How the national AI strategy is delivered in terms of increased R&D and innovation funding will be highly significant. Of course, who knows what ARIA may deliver? In my view, key weaknesses remain in the commercialisation and translation of AI R&D. The recent debate on the Science and Technology Committee’s report on catapults reminded us that this aspect is still a work in progress.
Recent Cambridge round tables have confirmed to me that we have a strong R&D base and a growing number of potentially successful spin-outs from universities, with the help of their dedicated investment funds, but when it comes to broader venture capital culture and investment in the later rounds of funding, we are not yet on a par with Silicon Valley in terms of risk appetite. For AI investment, we should now consider something akin to the dedicated film tax credit which has been so successful to date.
Finally, we had, and have, the vexed question of lethal autonomous weapons, which we raised in the original Select Committee report and in the follow-up, particularly in the light of the announcement at the time of the creation of the autonomy development centre in the MoD. Professor Stuart Russell, who has long campaigned on this subject, cogently raised the limitation of these weapons in his second Reith Lecture. In both our reports we said that one of the big disappointments was the lack of definition of “autonomous weapons”. That position subsequently changed, and we were told in the Government’s response to the follow-up report that NATO had agreed a definition of “autonomous” and “automated”, but there is still no comprehensive definition of lethal autonomous weapons, despite evidence that they have clearly already been deployed in theatres such as Libya, and the UK has firmly set its face against limitation of such weapons in international fora such as the CCW.
For a short report, our follow-up report covered a great deal of ground, which I have tried to cover at some speed today. AI lies at the intersection of computer science, moral philosophy, industrial education and regulatory policy, which makes how we approach the risks and opportunities inherent in this technology vital and difficult. The Government are engaged in a great deal of activity. The question, as ever, is whether it is focused enough and whether the objectives, such as achieving trustworthy AI and digital upskilling, are going to be achieved through the actions taken so far. The evidence of success is clearly mixed. Certainly there is still no room for complacency. I very much look forward to hearing the debate today and to what the Minister has to say in response. I beg to move.
Government should use procurement process to secure good work
Recently, in the context of the Procurement Bill, I argued for an obligation on the Government to have regard to the need to secure good work for those carrying out contracts under its procurement activities. This is what I said:
My own interests, and indeed concerns, in this area go back to the House of Lords Select Committee on AI. I chaired this ad hoc inquiry, which produced two reports: AI in the UK: Ready, Willing and Able? and a follow-up report via the Liaison Committee, AI in the UK: No Room for Complacency, which I mentioned in the debate on a previous group.
The issue of the adoption of AI and its relationship to the augmentation of human employment or substitution is key. We were very mindful of the Frey and Osborne predictions in 2013, which estimated that 47% of US jobs are at risk of automation—since watered down—relating to the sheer potential scale of automation over the next few years through the adoption of new technology. The IPPR in 2017 was equally pessimistic. Others, such as the OECD, have been more optimistic about the job-creation potential of these new technologies, but it is notable that the former chief economist of the Bank of England, Andrew Haldane, entered the prediction game not long ago with a rather pessimistic outlook.
Whatever the actual outcome, we said in our report that we need to prepare for major disruption in the workplace. We emphasised that public procurement has a major role in terms of the consequences of AI adoption on jobs and that risk and impact assessments need to be embedded in the tender process.
The noble Lord, Lord Knight, mentioned the All-Party Parliamentary Group on the Future of Work, which, alongside the Institute for the Future of Work, has produced some valuable reports and recommendations in the whole area of the impact of new technology on the workplace. In their reports—the APPG’s The New Frontier and the institute’s Mind the Gap—they recommend that public authorities be obliged to conduct algorithmic impact assessments as a systematic approach to and framework for accountability and as a regulatory tool to enhance the accountability and transparency of algorithmic systems. I tried to introduce in the last Session a Private Member’s Bill that would have obliged public authorities to complete an algorithmic impact assessment where they procure or develop an automated decision-making system, based on the Canadian directive on artificial intelligence impact assessments and the 2022 US Algorithmic Accountability Act.
In particular, we need to consider the consequences for work and working people, as well as the impact of AI on the quality of employment. We also need to ensure that people have the opportunity to reskill and retrain so that they can adapt to the evolving labour market caused by AI. The all-party group said:
“The principles of Good Work should be recognised as fundamental values … to guide development and application of a human-centred AI Strategy. This will ensure that the AI Strategy works to serve the public interest in vision and practice, and that its remit extends to consider the automation of work.”
The Institute for the Future of Work’s Good Work Charter is a useful checklist of AI impacts for risk and impact assessments—for instance, in a workplace context, issues relating to
“access … fair pay … fair conditions … equality … dignity … autonomy … wellbeing … support”
and participation. The noble Lord, Lord Knight, and the noble Baroness, Lady Bennett, have said that these amendments would ensure that impacts on the creation of good, local jobs and other impacts in terms of access to, terms of and quality of work are taken into account in the course of undertaking public procurement.
As the Work Foundation put it in a recent report,
“In many senses, insecure work has become an accepted part of the UK’s labour market over the last 20 years. Successive governments have prioritised raising employment and lowering unemployment, while paying far less attention to the quality and security of the jobs available.”
The Taylor review of modern working practices, Good Work—an independent report commissioned by the Department for Business, Energy and Industrial Strategy that remains largely unimplemented—concluded that there is a need to provide a framework that better reflects the realities of the modern economy and the spectrum of work carried out.
The Government have failed to legislate to ensure that we do not move even further down the track towards a preponderantly gig economy. It is crucial that they use their procurement muscle to ensure, as in Good Work, that these measures are taken on every major public procurement involving AI and automated decision-making.
The Queen's Speech 2022: Questioning the Government's Digital Agenda
Shortly after the Queen's Speech this year, which set out the Government's extensive legislative programme in the field of digital regulation, I took part in the Lords debate responding to the Speech.
My Lords, I shall focus mainly on the Government’s digital proposals. As my noble friend Lady Bonham-Carter, the noble Baroness, Lady Merron, and many other noble Lords have made clear, the media Bill and Channel 4 privatisation will face fierce opposition all around this House. It could not be clearer that the policy towards both Channel 4 and the BBC follows some kind of red wall-driven, anti-woke government agenda that has zero logic. The Up Next White Paper on PSB talks of
“embedding the importance of distinctively British content directly into the existing quota system.”
How does the Minister define “distinctively British content”? Is it whatever the Secretary of State believes it is? As for the Government’s response to the consultation on audience protection standards on VOD services, can the Minister confirm that Ofcom will have the power to assess whether a platform’s own-brand age ratings genuinely take account of the values and expectations of UK families, as the BBFC’s do?
But there are key issues that will need dealing with in the Online Safety Bill’s passage through Parliament. As we have heard from many noble Lords, the “legal but harmful” provisions are potentially dangerous to freedom of expression, with those harms not being defined in the Bill itself. Similarly, with the lack of definition of children’s harms, it needs to be clear that encouraging self-harm or eating disorders is explicitly addressed on the face of the Bill, as my honourable friend Jamie Stone emphasised on Second Reading. My honourable friend Munira Wilson raised whether the metaverse was covered. Noble Lords may have watched the recent Channel 4 “Dispatches” programme exposing harms in the metaverse and chat rooms in particular. Without including it in the primary legislation, how can we be sure about this? In addition, the category definitions should be based more on risk than on reach, which would take account of cross-platform activity.
One of the great gaps not filled by the Bill, or the recent Elections Act just passed, is the whole area of misinformation and disinformation which gives rise to threats to our democracy. The Capitol riots of 6 January last year were a wake-up call, along with the danger of Donald Trump returning to Twitter.
The major question is why the draft digital markets, competition and consumer Bill is only a draft Bill in this Session. The DCMS Minister Chris Philp himself said in a letter to the noble Baroness, Lady Stowell—the Chair of the Communications and Digital Committee—dated just this 6 May, that
“urgent action in digital markets is needed to address the dominance of a small number of very powerful tech firms.”
In evidence to the BEIS Select Committee, the former chair of the CMA, the noble Lord, Lord Tyrie, recently stressed the importance of new powers to ensure expeditious execution and to impose interim measures.
Given the concerns shared widely within business about the potential impact on data adequacy with the EU, the idea of getting a Brexit dividend from major amendments to data protection through a data reform Bill is laughable. Maybe some clarification and simplification are needed—but not the wholesale changes canvassed in the Data: A New Direction consultation. Apart from digital ID standards, this is a far lower business priority than reforming competition regulation. A report by the New Economics Foundation made what it said was a “conservative estimate” that if the UK were to lose its adequacy status, it would increase business costs by at least £1.6 billion over the next 10 years. As the report’s author said, that is just the increased compliance costs and does not include estimates of the wider impacts around trade shifting, with UK businesses starting to lose EU customers. In particular, as regards issues relating to automated decision-making, citizens and consumers need more protection, not less.
As regards the Product Security and Telecommunications Infrastructure Bill, we see yet more changes to the Electronic Communications Code, all the result of the Government taking a piecemeal approach to broadband rollout. I do, however, welcome the provisions on security standards for connectable tech products.
Added to a massive programme of Bills, the DCMS has a number of other important issues to resolve: the AI governance White Paper; gambling reform, as mentioned by my noble friend Lord Foster; and much-needed input into IP and performers’ rights reform and protection where design and AI are concerned. I hope the Minister is up for a very long and strenuous haul. Have the Government not clearly bitten off more than the DCMS can chew?
Camera Code of Practice: Motion to Regret
I recently moved a regret motion that "This House regrets the Surveillance Camera Code of Practice because (1) it does not constitute a legitimate legal or ethical framework for the police’s use of facial recognition technology, and (2) it is incompatible with human rights requirements surrounding such technology." The Government continues to resist putting in place a proper legislative framework for the collection and use of biometric data and the deployment of live facial recognition technology, despite the Bridges v South Wales Police case, the conclusions of the Ada Lovelace Institute’s Ryder review and its Countermeasures report, and the efforts of many campaigning organisations such as Big Brother Watch and Liberty.
My Lords, I have raised the subject of live facial recognition many times in this House and elsewhere, most recently last November, in connection with its deployment in schools. Following an incredibly brief consultation exercise, timed to coincide with the height of the summer holidays last year, the Government laid an updated Surveillance Camera Code of Practice, pursuant to the Protection of Freedoms Act 2012, before both Houses on 16 November last year, which came into effect on 12 January 2022.
The subject matter of this code is of great importance. The last Surveillance Camera Commissioner did a survey shortly before stepping down, and found that there are over 6,000 systems and 80,000 cameras in operation across 183 local authorities. The UK is now the most camera-surveilled country in the western world. According to recently published statistics, London remains the third most surveilled city in the world, with 73 surveillance cameras for every 1,000 people. We are also faced with a rising tide of the use of live facial recognition for surveillance purposes.
Let me briefly give a snapshot of the key arguments why this code is insufficient as a legitimate legal or ethical framework for the police’s use of facial recognition technology and is incompatible with human rights requirements surrounding such technology. The Home Office has explained that changes were made mainly to reflect developments since the code was first published, including changes introduced by legislation such as the Data Protection Act 2018 and those necessitated by the successful appeal of Councillor Ed Bridges in the Court of Appeal judgment on police use of live facial recognition issued in August 2020, which ruled that South Wales Police’s use of AFR—automated facial recognition—had not in fact been in accordance with the law on several grounds, including in relation to certain convention rights, data protection legislation and the public sector equality duty.
During the fifth day in Committee on the Police, Crime, Sentencing and Courts Bill last November, the noble Baroness, Lady Williams of Trafford, the Minister, described those who know about the Bridges case as “geeks”. I am afraid that does not minimise its importance to those who want to see proper regulation of live facial recognition. In particular, the Court of Appeal held in Bridges that South Wales Police’s use of facial recognition constituted an unlawful breach of Article 8—the right to privacy—as it was not in accordance with law. Crucially, the Court of Appeal demanded that certain bare minimum safeguards were required for the question of lawfulness to even be considered.
The previous surveillance code of practice failed to provide such a basis. This, the updated version, still fails to meet the necessary standards, as the code allows wide discretion to individual police forces to develop their own policies in respect of facial recognition deployments, including the categories of people included on a watch-list and the criteria used to determine when to deploy. There are but four passing references to facial recognition in the code itself. This scant guidance cannot be considered a suitable regulatory framework for the use of facial recognition.
There is, in fact, no reference to facial recognition in the Protection of Freedoms Act 2012 itself or indeed in any other UK statute. There has been no proper democratic scrutiny over the code and there remains no explicit basis for the use of live facial recognition by police forces in the UK. The forthcoming College of Policing guidance will not satisfy that test either.
There are numerous other threats to human rights that the use of facial recognition technology poses. To the extent that it involves indiscriminately scanning, mapping and checking the identity of every person within the camera’s range—using their deeply sensitive biometric data—LFR is an enormous interference with the right to privacy under Article 8 of the ECHR. A “false match” occurs where someone is stopped following a facial recognition match but is not, in fact, the person included on the watch-list. In the event of a false match, a person attempting to go about their everyday life is subject to an invasive stop and may be required to show identification, account for themselves and even be searched under other police powers. These privacy concerns cannot be addressed by simply requiring the police to delete images captured of passers-by or by improving the accuracy of the technology.
The ECHR requires that any interference with the Article 10 right to freedom of expression or the Article 11 right to free association is in accordance with law and both necessary and proportionate. The use of facial recognition technology can be highly intimidating. If we know our faces are being scanned by police and that we are being monitored when using public spaces, we are more likely to change our behaviour and be influenced on where we go and who we choose to associate with.
Article 14 of the ECHR ensures that no one is denied their rights because of their gender, age, race, religion or beliefs, sexual orientation, disability or any other characteristic. Police use of facial recognition gives rise to two distinct discrimination issues: bias inherent in the technology itself and the use of the technology in a discriminatory way.
Liberty has raised concerns regarding the racial and socioeconomic dimensions of police trial deployments thus far—for example, at Notting Hill Carnival for two years running as well as twice in the London Borough of Newham. The disproportionate use of this technology in communities against which it “underperforms”—according to its proponents’ standards—is deeply concerning.
As regards inherent bias, a range of studies have shown facial recognition technology disproportionately misidentifies women and BAME people, meaning that people from these groups are more likely to be wrongly stopped and questioned by police and to have their images retained as the result of a false match.
The Court of Appeal determined that South Wales Police had failed to meet its public sector equality duty, which requires public bodies and others carrying out public functions to have due regard to the need to eliminate discrimination. The revised code not only fails to provide any practical guidance on the public sector equality duty but, given the inherent bias within facial recognition technology, it also fails to emphasise the rigorous analysis and testing required by the public sector equality duty.
The code itself does not cover anybody other than the police and local authorities; in particular, it does not cover Transport for London, central government or private users, where there have also been concerning developments in terms of their use of police data. For example, it was revealed that the Trafford Centre in Manchester scanned the faces of every visitor for a six-month period in 2018, using watch-lists provided by Greater Manchester Police—approximately 15 million people. LFR was also used at the privately owned but publicly accessible site around King’s Cross station. Both the Met and British Transport Police had provided images for their use, despite originally denying doing so.
It is clear from the current and potential future human rights impact of facial recognition that this technology has no place on our streets. In a recent opinion, the former Information Commissioner took the view that South Wales Police had not ensured that a fair balance had been struck between the strict necessity of the processing of sensitive data and the rights of individuals.
The breadth of public concern around this issue is growing clearer by the day. Several major cities in the US have banned the use of facial recognition and the European Parliament has called for a ban on police use of facial recognition technology in public places and predictive policing. In response to the Black Lives Matter uprisings in 2020, Microsoft, IBM and Amazon announced that they would cease selling facial recognition technology to US law enforcement bodies. Facebook, aka Meta, also recently announced that it will be shutting down its facial recognition system and deleting the “face prints” of more than a billion people after concerns were raised about the technology.
In summary, it is clear that the Surveillance Camera Code of Practice is an entirely unsuitable framework to address the serious rights risk posed by the use of live facial recognition in public spaces in the UK. As I said in November in the debate on facial recognition technology in schools, the expansion of such tools is a
“short cut to a widespread surveillance state.”—[Official Report, 4/11/21; col. 1404.]
Public trust is crucial. As the Biometrics and Surveillance Camera Commissioner said in a recent blog:
“What we talk about in the end, is how people will need to be able to have trust and confidence in the whole ecosystem of biometrics and surveillance”.
I have on previous occasions, not least through a Private Member’s Bill, called for a moratorium on the use of LFR. In July 2019, the House of Commons Science and Technology Committee published a report entitled The Work of the Biometrics Commissioner and the Forensic Science Regulator. It repeated a call made in an earlier 2018 report that
“automatic facial recognition should not be deployed until concerns over the technology’s effectiveness and potential bias have been fully resolved.”
The much-respected Ada Lovelace Institute has also called for a
“a voluntary moratorium by all those selling and using facial recognition technology”,
which would
“enable a more informed conversation with the public about limitations and appropriate safeguards.”
Rather than update toothless codes of practice to legitimise the use of new technologies like live facial recognition, the UK should have a root and branch surveillance camera review which seeks to increase accountability and protect fundamental rights. The review should investigate the novel rights impacts of these technologies, the scale of surveillance we live under and the regulations and interventions needed to uphold our rights.
We were reminded by the leader of the Opposition on Monday about what Margaret Thatcher said, and I also said this to the Minister earlier this week:
“The first duty of Government is to uphold the law. If it tries to bob and weave and duck around that duty when it’s inconvenient, if Government does that, then so will the governed and then nothing is safe—not home, not liberty, not life itself.”
It is as apposite for this debate as it was for that debate on the immigration data exemption. Is not the Home Office bobbing and weaving and ducking precisely as described by the late Lady Thatcher?
My Lords, I thank the Minister for her comprehensive reply. This has been a short but very focused debate and full of extraordinary experience from around the House. I am extremely grateful to noble Lords for coming and contributing to this debate in the expert way they have.
Some phrases rest in the mind. The noble Lord, Lord Alton, talked about live facial recognition being the tactic of authoritarian regimes, and there are several unanswered questions about Hikvision in particular that he has raised. The noble Lord, Lord Anderson, talked about the police needing democratic licence to operate, which was also the thrust of what the noble Lord, Lord Rosser, has been raising. It was also very telling that the noble Lord, Lord Anderson, said the IPA code was much more comprehensive than this code. That is somewhat extraordinary, given the subject matter of the IPA code. The mantra of not stifling innovation seems to cut across every form of government regulation at the moment. The fact is that, quite often, certainty in regulation can actually boost innovation—I think that is completely lost on this Government.
The noble Baroness, Lady Falkner, talked about human rights being in a parlous state, and I appreciated her remarks—both in a personal capacity and as chair of the Equality and Human Rights Commission—about the public sector equality duty and what is required, and the fact that human rights need to be embedded in the regulation of live facial recognition.
Of course, not all speakers would go as far as I would in asking for a moratorium while we have a review. However, all speakers would go as far as I go in requiring a review. I thought the adumbration by the noble Lord, Lord Rosser, of the elements of a review of that kind was extremely useful.
The Minister spent some time extolling the technology —its accuracy and freedom from bias and so on—but in a sense that is a secondary issue. Of course it is important, but the underpinning of this by a proper legal framework is crucial. Telling us all to wait until we see the College of Policing guidance does not really seem satisfactory. The aspect underlying everything we have all said is that this is piecemeal—it is a patchwork of legislation. You take a little bit from equalities legislation, a little bit from the Data Protection Act, a little bit to come—we know not what—from the College of Policing guidance. None of that is satisfactory. Do we all just have to wait around until the next round of judicial review and the next case against the police demonstrate that the current framework is not adequate?
Of course I will not put this to a vote. This debate was to put down a marker—another marker. The Government cannot be in any doubt at all that there is considerable anxiety and concern about the use of this technology, but this seems to be the modus operandi of the Home Office: do the minimum as required by a court case, argue that it is entirely compliant when it is not and keep blundering on. This is obviously light relief for the Minister compared with the police Bill and the Nationality and Borders Bill, so I will not torture her any further. However, I hope she takes this back to the Home Office and that we come up with a much more satisfactory framework than we have currently.
Live Facial Recognition: Home Office in Denial
I recently asked a question about the new College of Policing guidance on live facial recognition and received this answer from Baroness Williams, the Home Office Minister.
So it's carry on surveilling.
To ask Her Majesty’s Government what assessment they have made of the new College of Policing guidance on live facial recognition.
The Minister of State, Home Office (Baroness Williams of Trafford) (Con)
My Lords, facial recognition is an important public safety tool that helps the police to identify and eliminate suspects more quickly and accurately. The Government welcome the College of Policing’s national guidance, which responds to a recommendation in the Bridges v South Wales Police judgment.
Lord Clement-Jones
My Lords, despite committing to a lawful, ethical approach, the guidance gives carte blanche to the use of live and retrospective facial recognition, potentially allowing innocent victims and witnesses to be swept on to police watch-lists. This is without any legislation or parliamentary or other oversight, such as that recently recommended by the Justice and Home Affairs Committee, chaired by my noble friend Lady Hamwee. Are we not now sleep-walking into a surveillance society, and is it not now time for a moratorium on this technology, pending a review?
Where should facial recognition be used?
14 February 2022: Interview with Gareth Mitchell (BBC), Stephanie Hare (author) and Lord Clement-Jones
When we think of our personal data, we often consider information like our phone number, bank details, or email address. But what about our eyes, ears, mouth, and nose? Facial recognition is increasingly being used to tag and track our individual activities, and while commonplace in unlocking personal devices like laptops and phones, certain institutions are keen to use our features for much more than mugshots. This includes the US Treasury, who last week backtracked on plans for mandatory facial verification for people logging their tax returns. So why are some people wary of firms having their faces on file? Robert Spencer finds out more...
Robert - It's a question that appears time and time again. How comfortable are we as a society with facial recognition? As unlocking your phone shows, in some respects the answer is clear, but when it comes to having your face scanned as you walk down the street, the issue becomes more murky.
Gareth - It's a biometric identifier. That means using aspects of your body for identification. The issue is that all of us are walking around in public showing our faces, meaning that anybody with a scanner, if they want to, can mount a camera and use an algorithm to identify us. We don't have any control over who is using our face as the identifier.
Robert - That's Gareth Mitchell who presents Digital Planet on the BBC world service. This lack of control and consent is key to one of the central paradoxes in the discussion around facial recognition. It speaks to the differences in technologies involved as Stephanie Hare explains in her new book, Technology Is Not Neutral: A Short Guide to Technology Ethics.
Stephanie - There are different types of facial recognition technology. So let's start with facial verification. That's the kind that you would use to unlock your own smartphone. That's not a very high-risk use of facial recognition technology because the biometric never leaves your phone. A higher-risk example is going to be when the police are using live facial recognition technology to identify people in a crowd. This might be high risk because it can have a chilling effect on free speech if people fear that, when they're going to these protests, they're being scanned by the police.
Robert - But it's not just about giving consent and having control of your biometrics. The algorithms themselves are large complex computer programs, often hidden behind company secrets. And it turns out, they aren't always as accurate as we'd like.
Stephanie - It doesn't work as well on people with darker skin. It works particularly poorly on women with darker skin, but it can also be a problem with children, with trans people and with elderly people.
Robert - The fix though might not be as simple as it seems.
Gareth - In order for the algorithms to get better at recognising a whole diversity of faces, that would mean training those algorithms on more and more faces. And so opponents would say, well, that just adds to the problem. One problem is that the algorithms are not very good at identifying a particular group of people, so let's just go and get loads of profiles of these kinds of people and put them into our databases. Well, then you've scanned even more faces, you've potentially compromised more people's privacy, and that's made the problem even worse.
Robert - Police forces around the UK also disagree on the use of the technology known as live facial recognition. The Met uses facial recognition to find offenders on watchlists, but Scottish police have halted its use.
Stephanie - Right now, our experience of this technology who's using it and how it's even discussed in law differs depending on your postcode.
Gareth - And another reason why facial ID has been so controversial is that some of these police forces have been rolling it out before there was a regulatory framework in effect to protect us and, if necessary, them.
Robert - This lack of a legal framework also concerns Lord Clement-Jones, who debated the issue last week in the House of Lords.
Lord Clement-Jones - And the general conclusion was that there was no single piece of legislation that really covered the use of live facial recognition. It's very easy to say we need to ban this technology, and I'm not quite in that camp. What I want to see, and this was the common ground, is a review. We want to see what basis there should be for legislation, we want to see how the technology performs, and then we want to be able to decide whether we should ban it or whether there are some uses to which it could be put with the right framework.
Robert - It's hard to ignore the distinct advantages facial recognition carries. It's fast and hands free. The ability to accurately and instantly identify a fugitive in a crowd would make the world a safer place.
Gareth - There was bound to be a trade off between our liberties and our security. We should be having conversations that are diverse, where a wide range of people are coming to the table with their views and their issues.
Stephanie - I would want to be hearing from scientists, the people who manufactured this tech, from the military, from the police, from medical professionals, from civil liberties groups. And I think it's the first step on a long journey that we have to have in the United Kingdom.
Robert - Lord Clement-Jones is optimistic.
Lord Clement-Jones - The public ought to take away from this debate that there are a great many parliamentarians concerned about the use of new technology without proper oversight. But they should also put pressure on their own MPs to ask, much more seriously, what is happening.
Robert - It's clear then that we need to have this discussion sooner rather than later. In the meantime, though, I'm going to keep using my face to unlock my phone. I'm not sure where the line in the sand is, but for me, it's a bit past this level of convenience.
New Surveillance Code Incompatible with Human Rights
Recently the Government introduced a revised Surveillance Camera Code of Practice which it claims makes the police's use of live facial recognition compliant with the Bridges case. This is the regret motion I tabled in response, with very helpful support from Liberty; my speech moving it is set out in full in the entry above.
That this House regrets the Surveillance Camera Code of Practice because (1) it does not constitute a legitimate legal or ethical framework for the police’s use of facial recognition technology, and (2) it is incompatible with human rights requirements surrounding such technology.
Artificial Intelligence and Intellectual Property: incentivise human innovation and creation
Christian Gordon-Pullar and I recently responded to the Government's Consultation Paper on Artificial Intelligence and Intellectual Property: Copyright and Patents. This is what we said:
As Artificial Intelligence (AI) becomes embedded in people’s lives, the United Kingdom (UK) is at a pivotal inflection point. The UK’s National AI Strategy rightly recognises Artificial Intelligence (AI) as the ‘fastest growing deep technology in the world, with huge potential to rewrite the rules of entire industries, drive substantial economic growth and transform all areas of life’ and estimates that AI could deliver a 10% increase in UK GDP in 2030.
The UK is potentially well positioned to become, over time, a world leader in AI, as a genuine research and innovation powerhouse, a hub for global talent and a progressive regulatory and business environment. Achieving this will involve attracting, retaining and incentivising business to create, protect and locate investment efforts in the UK. The UK has the potential to gain impetus from a position of strength in AI research, enterprise and ethical regulation, and, with its recent history of support for AI, it stands among the best in the world. To attract talent, incentivise investment in AI-powered or AI-focused innovation, influence global markets and shape global governance, the nature of the UK's Intellectual Property regime relating to AI will be crucial.
Specifically in relation to the three headline areas of focus in the Consultation Paper:
1. Copyright: Computer Generated Works
The UK is one of only a handful of countries to protect works generated by a computer where there is no human creator. The “author” of a “computer-generated work” (CGW) is defined as “the person by whom the arrangements necessary for the creation of the work are undertaken”. Protection lasts for 50 years from the date the work is made.
In the same way, the owner of the literary work, and of the copyright subsisting in it if it were original, would be, alternatively:
- a) the operator of an AI system (aligning its inputs and selecting its datasets and data fields); or
- b) their employer, if employed; or
- c) a third party, if the operator has a contract assigning such rights outside of an employment context.
To be original, a work must be an author’s or artist’s own intellectual creation, reflecting their personality (see the decisions of the EU Court of Justice in Infopaq, C-5/08, and Painer, C-145/10).
At the other end of the scale, a human who simply provides training data to an AI system and presses "analyse" is unlikely to be considered the author of the resulting work.
We therefore believe that the existing copyright legislative framework under the CDPA adequately addresses the current needs of AI developers. New entrants and disruptors can, in our opinion, work within the existing framework, which adequately caters for existing and foreseeable future needs.
Indeed, realistic hypothetical future scenarios may well involve an AI system having access to content from global providers and creating derivative content (whether under licence or not) at great speed, with little or no investment or "sweat of the brow". It can therefore be argued that the level of protection should in fact be reduced, so as to be proportionate to the time, effort and investment involved.
Further, we would also urge that copyright law be clarified to ensure that it is the operator (or his/her employer) of the AI system (that is, the person who guides the AI system to apply certain data or parameters and shapes the outcome) that is the copyright owner, and not the owner of the AI system.
One can see a future scenario where "AI-as-a-Service" is offered, whereby a content user or hirer of the AI system is allowed to apply their own rules, parameters and data/inputs to a problem whilst 'hiring' or using the AI system as a service (just as SaaS exists today). The operator of the AI system (not the owner of the AI system) should in that case be the first owner of the copyright in the resulting work (subject to contractual rights that may be transferred, licensed or otherwise assigned thereafter).
Ranking Options in order:
- We would therefore urge the IPO to choose Option 2 – a lesser term of copyright protection (e.g. 5-15 years) should apply to AI-generated copyright works such as music and art which, as described above, require little investment or "sweat of the brow".
- Failing our first choice, we would urge the IPO to choose Option 0 – make no legal change.
- Option 1, removing the protection, is not a viable or desirable option in our opinion.
2. Copyright: Text and Data Mining
The Government rightly believe that there is a need to promote and further enable AI development. This must, however, be balanced with a commensurate and proportionate recognition of the critical importance and value of data as raw material.
AI developers rely on high-quality data to develop reliable and innovative AI-driven inventions and applications. Licensing regimes under existing IP law are designed to cater for the needs of AI developers.
By the same token, content and data-driven businesses have themselves seen a rapid increase in the use of AI technology and machine learning, whether for news summaries, data-gathering efforts, translations for research and journalistic purposes, or to help organisations save time by processing large amounts of text and other data at scale and speed. Digital technologies, including AI, are and will continue to be of critical importance to these industries, helping create content, new products and value-added services to deliver to a broad range of corporate and retail clients. Whether in news media or cross-industry research, publishers are themselves investing in AI; continued collaboration with start-ups and academia is creating tailored materials for wide populations of beneficiaries (students, academia, research organisations, and even marketers of consumer publishing products).
It is of paramount importance to balance the needs of future AI development with the legal, commercial and economic rights of data-owners and the need to incentivize new AI adoption with recognition of the rights of existing content owners.
We have, however, seen no evidence that the existing copyright legislative framework fails to address adequately the current needs of AI developers. Moreover, it is particularly important, in our view, to ensure that the development of AI is not enabled at the expense of the underlying investment by copyright and data-owners (see endnote 1).
If the content owners of underlying data materials withhold the licensing of, or access to, such materials or attempt to price them at a level that is unfair, the answer is for Government via the Competition and Markets Authority/the new Digital Markets Unit (or indeed other regulators who form part of the Digital Regulation Cooperation Forum) to put in place competition measures to ensure there is a clear legal recourse in such situations.
In summary, we do not believe that current copyright law creates a disparity between the interests of AI developers and investors and those of content owners. The existing copyright regime under the CDPA reflects a balance that fairly protects those investing in data creation without giving an unfair advantage to technology companies offering AI-enabled content creation services. In particular, the current framework provides a balanced regime for text and data mining, and we believe no changes are required at present. However, we recommend a watching brief, and that the IPO consider and take account of changes to copyright laws in other countries that may make it more attractive for AI operators to base their operations in those locations, so that text and data mining, machine learning and similar activities become more easily performed elsewhere or are permitted with incentives not offered in the UK.
Ranking Options in order:
- We would therefore urge the IPO to elect Option 0 – Make no legal change. No other option is currently justifiable given the lack of evidence of an adverse commercial environment preventing access to data or text by AI-enabled content creators. Should the Government or IPO consider that there needs to be increased access to data at lower cost, it should look at other policy levers to stimulate such uptake, such as providing tax incentives for content owners to license content, rather than reducing copyright protection.
- We also concur with industry leads who consider that forcing rightsholders to opt in to protection, as suggested in Option 3, would be complicated and costly for many businesses and industries which own literally millions of works, when licensing is far simpler, and would be against the spirit of international treaties on copyright.
3. Patents
If UK patents were to protect AI-devised inventions, how should the inventor be identified, and who should be the patent owner? What effects does this have on incentivising and rewarding AI-devised inventions?
As we described above, the author and first owner of any AI-assisted or AI-created work will be the person who creates the work, or their employer if that person is an employee, or a third party if the operator has a contract assigning such rights outside of an employment context.
As the emphasis in copyright law suggests, creating a ‘work’ is in essence a human activity. This is given additional support by the reference to the automatic transfer of copyright from employee to employer; an AI system cannot be said to be an employee.
Similar principles apply, in our view, to patents as to copyright. For patentability, the applicant inventor must be a 'person'.
Authoritative guidance on how AI-created inventions fit into this scheme, where no human inventor is mentioned, is given in the decision in Thaler v Comptroller General of Patents, Trade Marks and Designs (aka 'Thaler' or the 'DABUS case'), and in particular, in our view, in the statements by Lord Justice Birss (L.J. Birss) in his dissenting opinion (see paragraphs 8, 58 and 78 et seq. of the DABUS case, and the Conclusion).
In summary, L.J. Birss set out his views on the lower courts' erroneous interpretations of the law and in conclusion stated:
- The inventor of an invention under the 1977 Act is the person who actually devised the invention.
- Dr Thaler has complied with his obligations under s13(2) of the 1977 Act because he has given a statement identifying the person(s) he believes the inventor to be (s13(2)(a)) and indicating the derivation of his right to be granted the patent (s13(2)(b)).
- It is no part of the Comptroller's functions under the 1977 Act to deem the applications as withdrawn simply because the applicant's statement under s13(2)(a) does not identify any person who is the inventor. Since the statement honestly reflects the applicant's belief, it satisfies s13(2)(a).
- It is no part of the Comptroller's functions under the 1977 Act to in any way be satisfied that the applicant's claim to the right to be granted the patent is good. In granting a patent to an applicant the Comptroller is not ratifying the applicant's claim to derivation. Dr Thaler's asserted claim, if correct, would mean he was entitled to the grant. Therefore the statement satisfies s13(2)(b).
- The fact that the creator of the inventions in this case was a machine is no impediment to patents being granted to this applicant.
All three judges in Thaler agreed that under the Patents Act (PA) 1977 an inventor must be a person, and that as a machine is not a person it therefore cannot be an "inventor" for the purposes of section 7(2) of the Act. L.J. Birss, however, dissented on the crucial point of whether the fact that the creator of an invention was a machine was, as such, an impediment to the grant of an application. In his view it was not; it was simply that a machine inventor cannot be treated as an inventor for the purpose of granting the application.
In Australia the court has taken a slightly different view, but there the law is different. As L.J. Birss remarked in his judgment in Thaler:
'After the hearing the appellant sent the court a copy of the judgment of Beach J of 30th July 2021 in the Federal Court of Australia, Thaler v Commissioner of Patents [2021] FCA 879. The judgment deals with another parallel case about applications for the same inventions. Beach J decided the case in Dr Thaler's favour. However yet again the relevant legislation is quite distinct from that in the UK. The applications reached the Australian Patent Office via the Patent Cooperation Treaty (PCT), which meant that a local rule (reg 3.2C(2)(aa)) applied which requires the applicant to provide the name of the inventor. That rule is in different terms from s13(2) and the present case is not a PCT application (i.e. in Australia the name of the inventor must be provided, unlike under UK legislation). If it were then the operation of s13(2) would be affected by a deeming provision (s89B(1)(c)) which we do not have to consider.'
We believe that in principle LJ Birss is correct and that the patentability of such inventions, where created by AI or with the assistance of AI, has been established, provided the basic criteria under the relevant legislation are met. There is therefore absolutely no need for the patent system to identify AI as the inventor or to create entirely new rights.
If the IPO takes the view, or it is established on appeal, that the law has not been correctly expressed by LJ Birss, it should be clarified to accord with his judgment. Failing that, for instance if AI systems themselves are treated as inventors, in our view the system of innovation and inventorship in the UK will be eroded, the benefits and incentives for human inventors will be reduced, and ultimately firms could invest more in AI systems than in human innovation.
Without changes in taxes on AI-inventorship and commensurate incentives to balance the negative impact, such a change would be detrimental to the ethos of the patent system and its focus on “a person” being the inventor mentioned in a patent application.
Whilst it is unclear at this stage exactly what the future regulation of AI and associated IP rights will look like in the UK, it is clear that an internationally harmonised approach to the protection and recognition accorded to AI-generated inventions would be desirable.
It is also right in principle, in our view, to cite L.J. Birss, that 'there is no rule of law that a new intangible produced by existing tangible property is the property of the owner of the tangible property, as Dr Thaler contended, and certainly no rule that the property contemplated by section 7(2)(b) in an invention created by a machine is owned by the owner of the machine. Accordingly, the hearing officer and the judge were correct to hold that Dr Thaler is not entitled to apply for patents in respect of the inventions given the premise that DABUS made the inventions'.
In our view, as with AI creations for copyright purposes, the key is the operation and control of the machine/AI producing the invention not ownership of the AI itself.
Ranking Options in order:
- We would therefore urge the IPO to elect Option 1, whereby it is clarified that "inventor" includes a human responsible for the inventive activity of the AI system that leads to the invention, or which devises inventions (e.g. where that human operator selects or guides the AI with relevant data, parameters, data-sets or programming logic for the AI's function or purpose, which leads it to create an invention). This would also cater for the scenario, analogous to that mentioned under 1 above, where AI becomes prevalent in the first instance as "AI-as-a-service", whereupon there should be a presumption of ownership by the AI operator (not the AI-system owner) and where transfers of ownership and rights can be addressed contractually at the point of use where AI is used '…as-a-service'.
- As a second-best option, as requested – particularly if the opinion of LJ Birss is subsequently confirmed by the Supreme Court – we would advocate Option 0 – no change.
Endnotes
- Reference: Authors Guild v. Google 721 F.3d 132 (2d Cir. 2015), a copyright case heard in the United States District Court for the Southern District of New York and, on appeal, in the United States Court of Appeals for the Second Circuit between 2005 and 2015. The case concerned fair use in copyright law and the transformation of printed copyrighted books into an online searchable database through scanning and digitisation. It centred on the legality of the Google Book Search (originally named Google Print) Library Partner project, which had been launched in 2003. Though there was general agreement that Google's attempt to digitise books through scanning and computer-aided recognition for searching online was a transformative step for libraries, many authors and publishers had expressed concern that Google had not sought their permission to make scans of the books still under copyright and offer them to users.
- Two separate lawsuits, one from three authors represented by the Authors Guild and another by the Association of American Publishers, were filed in 2005 charging Google with copyright infringement. Google worked with the litigants in both suits to develop a settlement agreement (the Google Book Search Settlement Agreement) that would have allowed it to continue the program while paying out for works it had previously scanned, creating a revenue program for future books that were part of the search engine, and allowing authors and publishers to opt out. The settlement received much criticism as it also applied to all books worldwide, included works that may have been out of print but still under copyright, and may have raised antitrust concerns given Google's dominant position within the Internet industry. A reworked proposal to address some of these concerns was met with similar criticism, and ultimately the settlement was rejected in 2011, allowing the two lawsuits to be joined for a combined trial. In late 2013, after the class action status was challenged, the District Court granted summary judgement in favour of Google, dismissing the lawsuit and affirming that the Google Books project met all legal requirements for fair use. The Second Circuit Court of Appeals upheld the District Court's summary judgement in October 2015, ruling that Google's "project provides a public service without violating intellectual property law." The U.S. Supreme Court subsequently denied a petition to hear the case.
A big thank you to Christian for all his hard work on this response.
Lord C-J: Protect Pure Maths
During the Report Stage of the Advanced Research and Invention Agency Bill I spoke in favour of changes to the bill to ensure that pure maths research was included in the definition of scientific research.
This is the recording:
https://twitter.com/i/status/1470883981973463049
And this is what I said:
My Lords, I have signed and I support Amendments 12, 13 and 14. As someone immersed in issues relating to AI, machine learning and the application of algorithms to decision-making over the years, I, too, support Protect Pure Maths in its campaign to protect pure maths and advance the mathematical sciences in the UK—and these amendments, tabled by the noble and gallant Lord, Lord Craig, reflect that.
The campaign points out that pure maths has been a great British success story, with Alan Turing, Andrew Wiles and Roger Penrose, the Nobel Prize winner—and, of course, more recently Hannah Fry has popularised mathematics. Stephen Hawking was a great exemplar, too. However, despite its value to society, maths does not always receive the funding and support that it warrants. Giving new funding to AI, for instance, risks overlooking the fundamental importance of maths to technology.
As Protect Pure Maths says, the 2004 BEIS guidelines on research and development, updated in 2010, currently limit the definition of science and research and development for tax purposes to the systematic study of the nature and behaviour of the physical and material universe. We should ensure that the ARIA Bill does not make the same mistake, and that the focus and capacity of the Bill’s provisions also explicitly include the mathematical sciences, including pure maths. Maths needs to be explicitly included as a part of scientific knowledge and research, and I very much hope that the Government accept these amendments.
Lord C-J helps to launch Rolls-Royce Aletheia Framework version 2
The Aletheia Framework is a practical one-page toolkit that guides developers, executives and boards both prior to deploying an AI and during its use. A second version has been developed by Caroline Gorski and her team at R2 Data Labs, Rolls-Royce, to be applicable across a wide range of sectors.
This is how they describe it:
"It asks them to consider 32 facets of social impact, governance and trust and transparency and to provide evidence which can then be used to engage with approvers, stakeholders or auditors.
A new module added in December 2021 is a tried and tested way to identify and help mitigate the risk of bias in training data and AIs. This complements the existing five-step continuous automated checking process, which, if comprehensively applied, tracks the decisions the AI is making to detect bias in service or malfunction and allow human intervention to control and correct it."
I commented on the original version of The Aletheia Framework, and it deals with many of the same areas in education as it does for Rolls-Royce in manufacturing – ethics, impact, compliance, data protection. So I saw an equivalence there, and the Institute for AI Ethics in Education adapted The Aletheia Framework for its needs.
Here are the two videos I made with Rolls Royce to mark the new version:
First, on why practical ethics matters right now to build public trust:
https://www.rolls-royce.com/sustainability/ethics-and-compliance/the-aletheia-framework.aspx
Second, to describe how we adapted the Aletheia Framework for education:
https://www.lordclementjones.org/wp-content/uploads/2021/12/Education-case-study.mp4