The Queen's Speech 2022: Questioning the Government's Digital Agenda
Shortly after the Queen's Speech this year, which set out the Government's extensive legislative programme in the field of digital regulation, I took part in the Lords debate responding to the Speech.
My Lords, I shall focus mainly on the Government’s digital proposals. As my noble friend Lady Bonham-Carter, the noble Baroness, Lady Merron, and many other noble Lords have made clear, the media Bill and Channel 4 privatisation will face fierce opposition all around this House. It could not be clearer that the policy towards both Channel 4 and the BBC follows some kind of red wall-driven, anti-woke government agenda that has zero logic. The Up Next White Paper on PSB talks of
“embedding the importance of distinctively British content directly into the existing quota system.”
How does the Minister define “distinctively British content”? Is it whatever the Secretary of State believes it is? As for the Government’s response to the consultation on audience protection standards on VOD services, can the Minister confirm that Ofcom will have the power to assess whether a platform’s own-brand age ratings genuinely take account of the values and expectations of UK families, as the BBFC’s do?
But there are key issues that will need dealing with in the Online Safety Bill's passage through Parliament. As we have heard from many noble Lords, the “legal but harmful” provisions are potentially dangerous to freedom of expression, with those harms not being defined in the Bill itself. Similarly, given the lack of definition of children's harms, it needs to be clear that encouraging self-harm or eating disorders is explicitly addressed on the face of the Bill, as my honourable friend Jamie Stone emphasised on Second Reading. My honourable friend Munira Wilson raised whether the metaverse was covered. Noble Lords may have watched the recent Channel 4 “Dispatches” exposing harms in the metaverse, and in chat rooms in particular. Without coverage in the primary legislation, how can we be sure about this? In addition, the category definitions should be based more on risk than on reach, which would take account of cross-platform activity.
One of the great gaps not filled by the Bill, or the recent Elections Act just passed, is the whole area of misinformation and disinformation which gives rise to threats to our democracy. The Capitol riots of 6 January last year were a wake-up call, along with the danger of Donald Trump returning to Twitter.
The major question is why the draft digital markets, competition and consumer Bill is only a draft Bill in this Session. The DCMS Minister, Chris Philp, himself said in a letter to the noble Baroness, Lady Stowell, the Chair of the Communications and Digital Committee, dated as recently as 6 May, that
“urgent action in digital markets is needed to address the dominance of a small number of very powerful tech firms.”
In evidence to the BEIS Select Committee, the former chair of the CMA, the noble Lord, Lord Tyrie, recently stressed the importance of new powers to ensure expeditious execution and to impose interim measures.
Given the concerns shared widely within business about the potential impact on data adequacy with the EU, the idea of getting a Brexit dividend from major amendments to data protection through a data reform Bill is laughable. Maybe some clarification and simplification are needed—but not the wholesale changes canvassed in the Data: A New Direction consultation. Apart from digital ID standards, this is a far lower business priority than reforming competition regulation. A report by the New Economics Foundation made what it said was a “conservative estimate” that if the UK were to lose its adequacy status, it would increase business costs by at least £1.6 billion over the next 10 years. As the report’s author said, that is just the increased compliance costs and does not include estimates of the wider impacts around trade shifting, with UK businesses starting to lose EU customers. In particular, as regards issues relating to automated decision-making, citizens and consumers need more protection, not less.
As regards the Product Security and Telecommunications Infrastructure Bill, we see yet more changes to the Electronic Communications Code, all the result of the Government taking a piecemeal approach to broadband rollout. I do, however, welcome the provisions on security standards for connectable tech products.
Added to a massive programme of Bills, the DCMS has a number of other important issues to resolve: the AI governance White Paper; gambling reform, as mentioned by my noble friend Lord Foster; and much-needed input into IP and performers’ rights reform and protection where design and AI are concerned. I hope the Minister is up for a very long and strenuous haul. Have the Government not clearly bitten off more than the DCMS can chew?
Surveillance Camera Code of Practice: Motion to Regret
I recently moved a regret motion that "This House regrets the Surveillance Camera Code of Practice because (1) it does not constitute a legitimate legal or ethical framework for the police’s use of facial recognition technology, and (2) it is incompatible with human rights requirements surrounding such technology." The Government continues to resist putting in place a proper legislative framework for the collection and use of biometric data and the deployment of live facial recognition technology, despite the Bridges v South Wales Police case, the conclusions of the Ada Lovelace Institute’s Ryder review and its Countermeasures report, and the efforts of many campaigning organisations such as Big Brother Watch and Liberty.
My Lords, I have raised the subject of live facial recognition many times in this House and elsewhere, most recently last November, in connection with its deployment in schools. Following an incredibly brief consultation exercise, timed to coincide with the height of the summer holidays, the Government laid an updated Surveillance Camera Code of Practice, pursuant to the Protection of Freedoms Act 2012, before both Houses on 16 November last year; it came into effect on 12 January 2022.
The subject matter of this code is of great importance. The last Surveillance Camera Commissioner did a survey shortly before stepping down, and found that there are over 6,000 systems and 80,000 cameras in operation across 183 local authorities. The UK is now the most camera-surveilled country in the western world. According to recently published statistics, London remains the third most surveilled city in the world, with 73 surveillance cameras for every 1,000 people. We are also faced with a rising tide of the use of live facial recognition for surveillance purposes.
Let me briefly give a snapshot of the key arguments why this code is insufficient as a legitimate legal or ethical framework for the police’s use of facial recognition technology and is incompatible with human rights requirements surrounding such technology. The Home Office has explained that changes were made mainly to reflect developments since the code was first published, including changes introduced by legislation such as the Data Protection Act 2018 and those necessitated by the successful appeal of Councillor Ed Bridges in the Court of Appeal judgment on police use of live facial recognition issued in August 2020, which ruled that South Wales Police’s use of AFR—automated facial recognition—had not in fact been in accordance with the law on several grounds, including in relation to certain convention rights, data protection legislation and the public sector equality duty.
During the fifth day in Committee on the Police, Crime, Sentencing and Courts Bill last November, the noble Baroness, Lady Williams of Trafford, the Minister, described those who know about the Bridges case as “geeks”. I am afraid that does not minimise its importance to those who want to see proper regulation of live facial recognition. In particular, the Court of Appeal held in Bridges that South Wales Police’s use of facial recognition constituted an unlawful breach of Article 8—the right to privacy—as it was not in accordance with law. Crucially, the Court of Appeal demanded that certain bare minimum safeguards were required for the question of lawfulness to even be considered.
The previous surveillance code of practice failed to provide such a basis. This, the updated version, still fails to meet the necessary standards, as the code allows wide discretion to individual police forces to develop their own policies in respect of facial recognition deployments, including the categories of people included on a watch-list and the criteria used to determine when to deploy. There are but four passing references to facial recognition in the code itself. This scant guidance cannot be considered a suitable regulatory framework for the use of facial recognition.
There is, in fact, no reference to facial recognition in the Protection of Freedoms Act 2012 itself or indeed in any other UK statute. There has been no proper democratic scrutiny over the code and there remains no explicit basis for the use of live facial recognition by police forces in the UK. The forthcoming College of Policing guidance will not satisfy that test either.
There are numerous other threats to human rights that the use of facial recognition technology poses. To the extent that it involves indiscriminately scanning, mapping and checking the identity of every person within the camera’s range—using their deeply sensitive biometric data—LFR is an enormous interference with the right to privacy under Article 8 of the ECHR. A “false match” occurs where someone is stopped following a facial recognition match but is not, in fact, the person included on the watch-list. In the event of a false match, a person attempting to go about their everyday life is subject to an invasive stop and may be required to show identification, account for themselves and even be searched under other police powers. These privacy concerns cannot be addressed by simply requiring the police to delete images captured of passers-by or by improving the accuracy of the technology.
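To make the mechanics of a "false match" concrete, here is a minimal sketch of the one-to-many matching step that live facial recognition performs; the threshold, embedding size and toy data are illustrative assumptions, not any force's actual system.

```python
import numpy as np

# Minimal sketch of one-to-many (1:N) watch-list matching in live facial
# recognition. Everything here (threshold, 128-dim embeddings, toy data)
# is an illustrative assumption, not a real deployment.

MATCH_THRESHOLD = 0.6  # similarity above which the system raises an alert

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(face: np.ndarray, watchlist: dict):
    """Compare one scanned face against every watch-list entry.

    A "false match" is an innocent passer-by whose embedding happens to
    score above MATCH_THRESHOLD against someone else's entry: the alert,
    and any stop that follows, lands on the wrong person.
    """
    best_name, best_score = None, -1.0
    for name, listed in watchlist.items():
        score = cosine_similarity(face, listed)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= MATCH_THRESHOLD:
        return best_name, best_score  # alert raised: a stop may follow
    return None, best_score           # no alert for this face

# Toy data standing in for a face-embedding model's output.
rng = np.random.default_rng(0)
watchlist = {f"suspect_{i}": rng.normal(size=128) for i in range(100)}
passer_by = rng.normal(size=128)  # an innocent member of the public
print(check_against_watchlist(passer_by, watchlist))
```

Note that every face in range is compared against every entry, so the rate of false matches scales with both crowd size and watch-list size; this is why deleting images afterwards does not cure the underlying interference.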
The ECHR requires that any interference with the Article 10 right to freedom of expression or the Article 11 right to free association is in accordance with law and both necessary and proportionate. The use of facial recognition technology can be highly intimidating. If we know our faces are being scanned by police and that we are being monitored when using public spaces, we are more likely to change our behaviour and be influenced on where we go and who we choose to associate with.
Article 14 of the ECHR ensures that no one is denied their rights because of their gender, age, race, religion or beliefs, sexual orientation, disability or any other characteristic. Police use of facial recognition gives rise to two distinct discrimination issues: bias inherent in the technology itself and the use of the technology in a discriminatory way.
Liberty has raised concerns regarding the racial and socioeconomic dimensions of police trial deployments thus far—for example, at Notting Hill Carnival for two years running, as well as twice in the London Borough of Newham. The disproportionate use of this technology in communities against which it “underperforms”—according to its proponents’ own standards—is deeply concerning.
As regards inherent bias, a range of studies have shown facial recognition technology disproportionately misidentifies women and BAME people, meaning that people from these groups are more likely to be wrongly stopped and questioned by police and to have their images retained as the result of a false match.
The Court of Appeal determined that South Wales Police had failed to meet its public sector equality duty, which requires public bodies and others carrying out public functions to have due regard to the need to eliminate discrimination. The revised code not only fails to provide any practical guidance on the public sector equality duty but, given the inherent bias within facial recognition technology, it also fails to emphasise the rigorous analysis and testing required by the public sector equality duty.
The code itself covers nobody other than the police and local authorities. It does not cover, in particular, Transport for London, central government or private users, where there have also been concerning developments in the use of police data. For example, it was revealed that the Trafford Centre in Manchester scanned the faces of every visitor—approximately 15 million people—over a six-month period in 2018, using watch-lists provided by Greater Manchester Police. LFR was also used at the privately owned but publicly accessible site around King’s Cross station. Both the Met and British Transport Police had provided images for use there, despite originally denying doing so.
It is clear from the current and potential future human rights impact of facial recognition that this technology has no place on our streets. In a recent opinion, the former Information Commissioner took the view that South Wales Police had not ensured that a fair balance had been struck between the strict necessity of the processing of sensitive data and the rights of individuals.
The breadth of public concern around this issue is growing clearer by the day. Several major cities in the US have banned the use of facial recognition and the European Parliament has called for a ban on police use of facial recognition technology in public places and predictive policing. In response to the Black Lives Matter uprisings in 2020, Microsoft, IBM and Amazon announced that they would cease selling facial recognition technology to US law enforcement bodies. Facebook, aka Meta, also recently announced that it will be shutting down its facial recognition system and deleting the “face prints” of more than a billion people after concerns were raised about the technology.
In summary, it is clear that the Surveillance Camera Code of Practice is an entirely unsuitable framework to address the serious rights risk posed by the use of live facial recognition in public spaces in the UK. As I said in November in the debate on facial recognition technology in schools, the expansion of such tools is a
“short cut to a widespread surveillance state.”—[Official Report, 4/11/21; col. 1404.]
Public trust is crucial. As the Biometrics and Surveillance Camera Commissioner said in a recent blog:
“What we talk about in the end, is how people will need to be able to have trust and confidence in the whole ecosystem of biometrics and surveillance”.
I have on previous occasions, not least through a Private Member’s Bill, called for a moratorium on the use of LFR. In July 2019, the House of Commons Science and Technology Committee published a report entitled The Work of the Biometrics Commissioner and the Forensic Science Regulator. It repeated a call made in an earlier 2018 report that
“automatic facial recognition should not be deployed until concerns over the technology’s effectiveness and potential bias have been fully resolved.”
The much-respected Ada Lovelace Institute has also called for
“a voluntary moratorium by all those selling and using facial recognition technology”,
which would
“enable a more informed conversation with the public about limitations and appropriate safeguards.”
Rather than update toothless codes of practice to legitimise the use of new technologies like live facial recognition, the UK should have a root and branch surveillance camera review which seeks to increase accountability and protect fundamental rights. The review should investigate the novel rights impacts of these technologies, the scale of surveillance we live under and the regulations and interventions needed to uphold our rights.
We were reminded by the leader of the Opposition on Monday about what Margaret Thatcher said, and I also said this to the Minister earlier this week:
“The first duty of Government is to uphold the law. If it tries to bob and weave and duck around that duty when it’s inconvenient, if Government does that, then so will the governed and then nothing is safe—not home, not liberty, not life itself.”
It is as apposite for this debate as it was for that debate on the immigration data exemption. Is not the Home Office bobbing and weaving and ducking precisely as described by the late Lady Thatcher?
My Lords, I thank the Minister for her comprehensive reply. This has been a short but very focused debate and full of extraordinary experience from around the House. I am extremely grateful to noble Lords for coming and contributing to this debate in the expert way they have.
Some phrases rest in the mind. The noble Lord, Lord Alton, talked about live facial recognition being the tactic of authoritarian regimes, and there are several unanswered questions about Hikvision in particular that he has raised. The noble Lord, Lord Anderson, talked about the police needing democratic licence to operate, which was also the thrust of what the noble Lord, Lord Rosser, has been raising. It was also very telling that the noble Lord, Lord Anderson, said the IPA code was much more comprehensive than this code. That is somewhat extraordinary, given the subject matter of the IPA code. The mantra of not stifling innovation seems to cut across every form of government regulation at the moment. The fact is that, quite often, certainty in regulation can actually boost innovation—I think that is completely lost on this Government.
The noble Baroness, Lady Falkner, talked about human rights being in a parlous state, and I appreciated her remarks—both in a personal capacity and as chair of the Equality and Human Rights Commission—about the public sector equality duty and what is required, and the fact that human rights need to be embedded in the regulation of live facial recognition.
Of course, not all speakers would go as far as I would in asking for a moratorium while we have a review. However, all speakers would go as far as I go in requiring a review. I thought the adumbration by the noble Lord, Lord Rosser, of the elements of a review of that kind was extremely useful.
The Minister spent some time extolling the technology —its accuracy and freedom from bias and so on—but in a sense that is a secondary issue. Of course it is important, but the underpinning of this by a proper legal framework is crucial. Telling us all to wait until we see the College of Policing guidance does not really seem satisfactory. The aspect underlying everything we have all said is that this is piecemeal—it is a patchwork of legislation. You take a little bit from equalities legislation, a little bit from the Data Protection Act, a little bit to come—we know not what—from the College of Policing guidance. None of that is satisfactory. Do we all just have to wait around until the next round of judicial review and the next case against the police demonstrate that the current framework is not adequate?
Of course I will not put this to a vote. This debate was to put down a marker—another marker. The Government cannot be in any doubt at all that there is considerable anxiety and concern about the use of this technology, but this seems to be the modus operandi of the Home Office: do the minimum as required by a court case, argue that it is entirely compliant when it is not and keep blundering on. This is obviously light relief for the Minister compared with the police Bill and the Nationality and Borders Bill, so I will not torture her any further. However, I hope she takes this back to the Home Office and that we come up with a much more satisfactory framework than we have currently.
Live Facial Recognition: Home Office in Denial
I recently asked a question about the new College of Policing guidance on live facial recognition and received this answer from Baroness Williams, the Home Office Minister.
So it’s “carry on surveilling”.
To ask Her Majesty’s Government what assessment they have made of the new College of Policing guidance on live facial recognition.
The Minister of State, Home Office (Baroness Williams of Trafford) (Con)
My Lords, facial recognition is an important public safety tool that helps the police to identify and eliminate suspects more quickly and accurately. The Government welcome the College of Policing’s national guidance, which responds to a recommendation in the Bridges v South Wales Police judgment.
Lord Clement-Jones
My Lords, despite committing to a lawful, ethical approach, the guidance gives carte blanche to the use of live and retrospective facial recognition, potentially allowing innocent victims and witnesses to be swept on to police watch-lists. This is without any legislation or parliamentary or other oversight, such as that recently recommended by the Justice and Home Affairs Committee, chaired by my noble friend Lady Hamwee. Are we not now sleep-walking into a surveillance society, and is it not now time for a moratorium on this technology, pending a review?
Design Rights Still the "Poor Cousins": Better Design Protection Needed
Christian Gordon-Pullar (to whom huge thanks) and I recently put forward evidence to the IPO on what we thought the future of design rights should be. We said the following in our introduction.
Context:
The existing framework of intellectual property laws that protect designs is overly complex. In the UK, depending on how one characterises the context, four to six different types of overlapping design protection are available (see below). This creates barriers for designers seeking to protect their creations effectively. Further, the overlapping elements of copyright and design protection in the UK are confusing to many designers and even to some legal practitioners. Designs have traditionally been protected by design law, copyright law, or both, depending of course on whether the respective requirements for protection are met.
By way of introductory context, the Government’s Innovation Strategy states that:
‘[D]esign is core to successful innovation’ and that ‘[G]reat design means putting the needs, wishes and behaviours of people at the heart of the innovation process, so that new ideas are truly desirable as well as being technically feasible and financially viable’.
If that is the case, then designers need to be afforded better tools, simpler processes and greater clarity in the law as it applies to designers and the protection of their creative rights. The IPO recognises that SMEs need the right support in commercialising their IP, and addressing these gaps and complexities in the laws affecting designs is a key element of that support for SMEs in the design industry.
In 2018, the Design Council reported that the design economy generated £85.2bn in gross value added (‘GVA’) for the UK in 2016, equivalent to 7% of total UK GVA. The Call for Views on Designs states that this ‘demonstrates the importance of designs to the United Kingdom both now and in the future’. Yet the industry is struggling with the costs and complexities of UK designs law, especially post-Brexit. Further, adoption of new technologies that might ease some of the cost and time challenges the UK’s design industry faces under the current law is not keeping pace.
Entrepreneurs and designers in SME businesses in the textile, industrial and non-industrial design sectors are perhaps among the most seriously affected, bearing both the cost of upskilling staff to navigate post-Brexit inconsistencies in the laws affecting designs and the cost of adopting new technology, such as 3-D printing, 4-D designs or AI-generated designs.
Perhaps because of its complexity, design protection is often misunderstood by designers and so less used compared with other IP rights. In our conversations with industry groups, a consistent sentiment was that designers are the ‘poor cousins’ of their counterparts in the music, fashion or software industries. According to ACID feedback, this complexity is also being used by some lawyers to exclude the small designer who lacks the funds to instruct lawyers, often an unaffordable expense for SMEs.
This submission advocates a substantial overhaul of the overlapping rights and a resetting of the law relating to design protection in the UK, to help designers make the most of UK designs law and related IPO processes. This includes maximising the benefits of the rights granted by designs law whilst promoting a more efficient, cheaper, simpler and more usable registration system, with the adoption of new AI and machine learning search tools that neither lengthen the existing registration process nor raise applicant costs any further.
Scope and Ambit
Any new regime needs to recognise not only the overlapping challenges of the existing legislative framework but also the impact of new technologies, including 3-D printing, 3-D/4-D designs and the pioneering work in 4-D printing, technologies that are increasingly available and prevalent in the US, the EU and the UK. It should also seek comparable benchmarks in the US, EU and Asia, to ensure that UK designs law remains competitive, encourages design registrations in the UK and offers clear, cost-effective solutions to the industry’s challenges. The breadth of the consultation suggests that the Government is willing, with clear intent, to assist business in creating a national designs regime that works better for designers. We would therefore encourage the IPO to address these concerns with a significant overhaul of the system, helping businesses to leverage the strategic and commercial value of their designs.
Overview / Summary of Feedback
The headline points of note, in our view, are as follows:
- Brexit-related consequences. Post-Brexit changes to EU IP left the design industry needing to register separately in the EU and the UK; prior to Brexit, the position was clearer and easier.
- Higher legal costs. The industry has lost some confidence in IP attorneys, who are not always familiar with registered and unregistered design practice or the finer points of how copyright and design laws apply, especially post-Brexit;
- Loss of confidence. The sector has lost confidence in the design registration system, which it sees as offering no real protection against larger companies that pay for legal advice and continue to infringe.
- Damages. Where awarded, these are low and do not compensate for costs; statutory guidance on damages, or minimum damages provisions, would help the industry.
- Novelty searches. Currently, the UK IPO does not search for conflicting designs when it receives an application. This area needs attention and redress. Whilst some expressed concerns about the cost of new searches, delays in application procedures and the cost of training to use new tools, there is a significant opportunity to use AI and machine learning in the registered design process to simplify matters, making registered design searches easier and helping countless businesses through the current quagmire (see the illustrative sketch after this list).
- Criminal sanctions. Further criminal sanctions and provisions should exist in UK law beyond those in the existing legislative framework, e.g. s.5 of the RDA. It is worth recalling that the majority of designers rely on unregistered design protection; criminal sanctions should therefore also exist for unregistered design infringement. We would advocate a change in the law, making it a criminal offence to infringe unregistered design rights. This reflects the contention that there are more attractive protections elsewhere[i];
- Harmonisation. Globally, there is a lack of uniformity in registered design law; Dubai is one example of the issues experienced by ACID. The UK should consider EU laws and the laws of comparable jurisdictions and frame a harmonising law for designs. One element to include would be that the required first disclosure may take place anywhere, whether in the EU, the UK or elsewhere. The place where a design is first disclosed has become critical in determining whether it is protected as a supplementary unregistered design in the UK or as an unregistered Community design in the EU.
- Term for registered and unregistered designs: extension. We would advocate a change in the duration of protection for unregistered design rights. Ten years from disclosure is too short, and we would recommend extending this to 25 years (given that calls to extend IP protection for AI-generated IP might extend such protection to 25 years).
- Economic Impact. Key members of ACID (Anti-Copying in Design) have indicated[6] that their members and organisations have ‘scaled back on global exports post Brexit as a result of above’.
- Registered design for 3-D/4-D printing. This remains an open question. Whilst 3-D printing is now prevalent[7], 4-D printing is emerging as a new area of industrial and design application that will require new legislative consideration. 4-D printing may forever change the design landscape. 3-D printing, or additive manufacturing, has already allowed designers to create three-dimensional objects from two-dimensional digital files, with obvious implications for counterfeiting, particularly in industrial design sectors but also in the fashion industry.
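To illustrate the novelty searches point above, here is a sketch, under our own assumptions, of how an AI-assisted similarity search over a register of design images might work. The embed() function is a stand-in for a trained image-embedding model, and the register numbering and toy data are entirely hypothetical.

```python
import numpy as np

# Illustrative sketch of an AI-assisted novelty search for registered designs.
# The random projection below is a placeholder for a trained image-embedding
# model; in a real system prior embeddings would be precomputed and indexed.

rng = np.random.default_rng(42)
PROJECTION = rng.normal(size=(64 * 64, 256))  # stands in for a learned model

def embed(design_image: np.ndarray) -> np.ndarray:
    """Map a 64x64 greyscale design image to a unit-length 256-dim descriptor."""
    vector = design_image.reshape(-1) @ PROJECTION
    return vector / np.linalg.norm(vector)

def novelty_search(application_image: np.ndarray, prior_register: dict, top_k: int = 5):
    """Return the top_k most similar prior registrations for examiner review."""
    query = embed(application_image)
    scores = {reg_no: float(query @ embed(image))
              for reg_no, image in prior_register.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Toy register of previously registered designs (hypothetical numbering).
prior_register = {f"GB-{n:07d}": rng.random((64, 64)) for n in range(200)}
application = rng.random((64, 64))
for reg_no, score in novelty_search(application, prior_register):
    print(f"{reg_no}: similarity {score:.3f}")
```

The point of the sketch is that such a tool ranks candidate conflicts for a human examiner rather than deciding novelty itself, which is why it need not lengthen the registration process.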
In addition to the copyright protection afforded to works of artistic craftsmanship in designs, four specific ‘design rights’ available in the UK prior to 31 December 2020 (IP completion day) are set out below.
- Registered design rights:
  - UK registered designs
  - Registered Community Designs (RCDs) (in force across the entire EU)
- Unregistered design rights:
  - UK unregistered design right (also known as ‘design right’)
  - Unregistered Community Designs (UCDs) (in force across the entire EU)
Those four different types of overlapping design protection available today in the UK are:
- registered designs,
- supplementary unregistered design,
- continuing unregistered Community design (if made public in the UK or EU before 1 January 2021); and
- UK unregistered design right (“design right”).
In addition, as mentioned, copyright will also subsist in works of artistic craftsmanship.
In addition, there are two further EU rights: a) the Community Unregistered Design Right; and b) the Community Registered Design Right. Whilst these are not available in the UK, they remain available to UK designers who first publish their designs within the EU.
Recommendations: Framework and Harmonisation
We encourage the IPO and UK Government to take steps to restore designers’ confidence in the UK design registration system by simplifying and clarifying the rights into three categories:
- Registered designs (including 3-D and 4-D designs).
- Unregistered designs: harmonising all elements into a single, simpler category of unregistered designs (removing overlaps with copyright, or clarifying that protection for such rights shall subsist in copyright). Currently, the shape and configuration (how the parts are arranged) of a three-dimensional object is automatically protected in the UK for whichever ends first of:
  - 10 years after it was first sold; or
  - 15 years after it was created.
- Copyright: clarifying which elements applicable to designs can be protected by copyright, including consideration and treatment of 3-D and 4-D designs, adding clarity for applicants and users and removing the current confusion caused by overlapping elements.
- At the international level, Article 2(7) of the Berne Convention on the Protection of Literary and Artistic Works (hereinafter, “Berne Convention”) lets signatory countries decide the extent and mode of application of their copyright laws to works of applied art and industrial designs and models. In the EU, Article 17 of the Design Directive obliges Member States to consider designs protected by design rights also eligible for copyright protection, if such designs comply with the respective protection requirements. The key requirement for copyright protection in the EU is originality, defined as “the author’s own intellectual creation.”
- For this requirement to be fulfilled, the author must have been able to express her creativity in an original manner. Article 17 of the Design Directive also states that “[t]he extent to which, and the conditions under which, such a protection is conferred, including the level of originality required, shall be determined by each Member State.”
- The provision seemed thus to leave to Member States the decision of whether to apply the harmonised criterion of originality – “the author’s own intellectual creation” – to designs, or to instead adopt a different protection requirement for this specific type of work. Following this possibility, a few Member States such as Germany, Portugal and Spain required a higher threshold of originality or artistic merit of the work.
- The CJEU has however clarified that the harmonised criterion for copyright protection applies to designs as well. In other words, the only requirement that a design has to fulfil in order to be protected by copyright in the EU is that it is original in the sense that it is the author’s own intellectual creation.
Lord C-J: Marking International Women's Day and the Need for Free Speech
Recently I took part in the House of Lords debate to take note of International Women’s Day and the United Kingdom’s role in furthering and protecting the equality of women in the UK and internationally. I not only stressed the lack of progress on women's rights in the UK but also the threats to their freedom of expression, especially online.
My Lords, I am very pleased to take part in today’s debate. Like many others, I am pleased that the noble Baroness, Lady Stedman-Scott, opened it in such a comprehensive way. I certainly do not envy her in responding to it, however, as it has already been so wide-ranging.
I take part today with a somewhat heavy heart, partly because I see the suffering of the women in Ukraine who have to bear such a heavy burden in facing the onslaught of a vicious Russian invasion, whether they stay or flee their homeland. Like the noble Lord, Lord Farmer, I salute the courage of the Russian TV news editor Marina Ovsyannikova, and I of course celebrate the wonderful news of Nazanin Zaghari-Ratcliffe coming home. I also celebrate the growing recognition of the achievements of women in the digital policy space that I speak on frequently.
More broadly, however, contrary to expectations, there has been a deterioration in women’s rights and condition in this country in many ways. I was a teenager in the 1960s, and it seemed then that growing equality of treatment in all walks of life and respect for women’s rights would lead to a better society. In so many areas, I fear that is not the case. As Refuge says in its briefing, women and girls in the UK continue to face appalling levels of violence. More than one in four women in England and Wales aged 16 to 74 experiences domestic abuse at some point in their lives, and an average of two women a week are killed by their partner or ex-partner—a statistic which has not changed in decades. Women’s Aid highlights the fact that 60% to 70% of women accessing mental health services have experienced domestic abuse.
As Refuge also says, technology is increasingly being weaponised by perpetrators of domestic abuse to harass, stalk and abuse survivors. Technology-facilitated domestic abuse—or tech abuse—has a devastating impact on both mental health and physical safety. The Online Safety Bill, published today, will be judged not only by whether it protects children, but also by whether it protects women from this kind of abuse.
Moreover, rape charges and convictions are at a minuscule level. Home Office crime figures show 56,152 alleged rapes in the year to September 2020, but analysis shows that just 1.5% of reported cases produced a charge. The Victims’ Commissioner for England and Wales, Dame Vera Baird, said:
“If you are raped in Britain today, your chances of seeing justice are slim. Even though police are now referring more and more cases to the CPS, we have seen a catastrophic fall in rape prosecutions. The latest data show just 1.5% of cases result in a charge. That means that more than 98% of cases do not reach court. This is shameful and has real and profound consequences for victims up and down the country.”
The drop in prosecutions has led to fewer convictions. There were 1,074 rapists convicted in the year to December 2020, a record low and a decline of 64% from the 2,991 convictions in 2016. In criminal justice, we have had equivocation about the status of misogynistic abuse and conduct as a criminal offence, as we heard so cogently from the noble Baroness, Lady Kennedy.
In healthcare, the foreword by Dame Clare Gerada to Public Policy Projects’ Redressing the Balance: A Women’s Health Agenda is damning. She says:
“It is my personal feeling that women have no more rights regarding their bodies and healthcare than when I was born 62 years ago. As a GP of over 40 years, I have treated thousands of women. However, throughout the process of crafting this report, I have been shocked to learn that many of the medical interventions and procedures held up by institutions and policymakers are not in place for the good of women’s health but serve to prevent women from being in control of their own bodies.”
Single-sex wards are under threat, as we have discussed during the passage of the Health Bill last night.
As a Clapham resident, I was shocked by the policing of the Sarah Everard vigil, the complacency of the outgoing commissioner and the report of the police inspectorate. I am pleased by the outcome of the recent High Court case brought by the organisers, Jessica Leigh, Anna Birley, Henna Shah and Jamie Klingler. All that brings me on to freedom of speech, which is meant to be so dear to us in this country, and in particular to my party. Given the publication today of the Online Safety Bill and the response to the Joint Select Committee on which I sat, it is a highly topical issue.
I have never been trolled or piled on, but when I see those who engage in legitimate discussion of sex-based rights, such as Kathleen Stock, Maya Forstater, JK Rowling, the noble Baroness, Lady Falkner, and others in my party, cancelled, insulted or moderated out, then I despair. Where does this lead? Can we, for instance, not talk about the concerns raised by the Cass Review, the independent review of gender identity services for children and young people, for fear of giving offence?
There are a few facts from the Cass review which we must talk about. The number of children and young people being referred to GIDS has increased dramatically in just over a decade—from approximately 50 referrals per year in 2009 to 2,500 in 2020, with a waiting list of 4,600. The large majority of these referrals are now for what the report calls “birth-registered females” who are presenting in their early teens. In 2009, girls made up one-third of referrals to GIDS. In 2016, they made up over two-thirds. I have long worked in the autism field, and it is notable that around one-third of children and young people referred to GIDS have autism or other types of neurodiversity. In the population as a whole, the percentage of people with autism is approximately 1%.
If we are to be able to genuinely celebrate women’s achievements and advance their rights, it is vital that we are able to hold a conversation which is civilised and respectful. We must all be allowed to express our genuine beliefs—or, indeed, facts—without fear of censorship or abuse.
Lib Dems back new policy on Democracy and Public Debate
At our recent conference the Party backed a new policy paper on Democracy and Public Debate.
This is what I said in support.
Conference, as we saw from the debate last autumn, policy on public debate and free speech is difficult to get right, particularly online. We have to make sure, as a party, that we find the proper balance between the right not to experience harm online and the right to freedom of expression.
I admit that none of us is going to agree with every single word of a policy document when the digital world is moving so fast, but this is a good motion and we need to make decisions.
I spent six months of last year on the Joint Scrutiny Committee, looking in detail at the Government’s flawed proposals for the regulation of online harms in the draft Online Safety Bill. We heard evidence from a great number of witnesses about the grievous harms, such as revenge porn, cyberflashing, trolling, encouragement to suicide and racism, being experienced online, in particular by children and those with protected characteristics.
We also heard about the potentially dangerous impact of online platforms on our democracy, in the way their algorithms and business models target messages, often extreme ones, using our own behavioural data.
Frances Haugen, the brave former Facebook (now Meta) employee, in particular gave us a vital insight into the threats to our democracy from online mis- and disinformation and the platforms’ failure to take adequate action.
The January 6th riot at the Capitol in Washington was fuelled by social media. We now know the reality of Russian interference in presidential elections and the Brexit referendum via opaque social media accounts.
People should have the same rights online as they have offline, and we must also recognise the unique dangers that online access sometimes poses.
The Online Safety Bill is due to be published this coming week. The Elections Bill is going through Parliament. We have new digital competition law coming down the track. If we pass this motion today it will give us a distinctive Liberal Democrat approach that we can be proud of.
Our Digital Bill of Rights will set out the principles of the approach, in particular the right to free expression and participation online without being subject to harassment and abuse, which should underpin our party and society.
We must regulate to ensure that platforms comply with these principles: audited for their policies and processes in the treatment of users and the dissemination of illegal content, monitored on how they respond to infringement, and sanctioned for failure, with the Communications Court as a backstop.
Media and digital literacy, so strongly emphasised by the Democracy and Public Debate paper, is vital too. We need, as it says, to combat misinformation with critical thinking.
Conference, this motion combines principles, regulation and education in the right proportion. Please back it overwhelmingly.
To Save Democracy We Must Tackle Dis- and Misinformation Online
Recently the House of Lords debated the report “Digital Technology and the Resurrection of Trust”, produced by the Democracy and Digital Technologies Select Committee chaired by Lord David Puttnam, now sadly retired from the House of Lords.
This is an edited version of the speech I gave winding up the debate.
My Lords, this has been an inspiring debate. Events in Ukraine should make us all cherish our democracy in Britain and strengthen our determination to reinforce democratic values across the world. Nothing can compare with the suffering of the Ukrainian people in defence of their democracy: they are a shining example to us all.
It is regrettable that we are debating this excellent report, which is still highly topical, nearly two years after it was published. I, like all of us who have spoken in this debate, very much miss Lord Puttnam leading the charge on the issues so important to him, and with which his valedictory lecture last October dealt so brilliantly. We also owe a big debt of gratitude to the noble Lord, Lord Lipsey, for stepping in and for his masterful introduction. It is good to see so many members of the committee participating today.
As the noble Lord, Lord Lipsey, says, what seemed controversial then has become commonplace today. Some of the recommendations of the committee are already in the pipeline, but we need to give far more attention to the other recommendations that are not in the pipeline. Given the crossover with many aspects of the report of the Joint Committee on the Draft Online Safety Bill, I am particularly pleased to be taking part in this debate today.
In a piece four years ago, US tech journalist Dylan Matthews wrote:
“The internet was supposed to save democracy… How could we have gotten this so wrong?”
He wrote this in the light of alleged manipulation by Russia, both in the US presidential elections and in the Brexit vote, with the aid of Cambridge Analytica, which used data collected online from millions of personal Facebook accounts to target individuals with specific misinformation. As the noble Baroness, Lady Morris, said, we were too slow to see the risks. As the noble Lord, Lord Stevenson, said, who doubts this activity now?
In the intervening years, the power of viral disinformation on social media has become even clearer. The long-delayed report on Russian interference, by the Intelligence and Security Committee in July 2020, said:
“The UK is clearly a target for Russia’s disinformation campaigns and political influence operations and must therefore equip itself to counter such efforts.”
We also had the riots at the Capitol in Washington DC on 6 January 2021, mentioned by the noble Lord, Lord Harris. An investigation by ProPublica and the Washington Post found that Facebook groups swelled with at least 650,000 posts attacking the legitimacy of Joe Biden’s victory between election day and the 6 January riot, with many calling for executions or other political violence.
We have had former Facebook—now Meta—employee Frances Haugen’s damning testimony, mentioned by the noble Baroness, Lady Kidron, and the noble Lord, Lord Mitchell, to the USA Senate and our own Joint Committee on the Draft Online Safety Bill, on which I sat. She accused the company of putting
“astronomical profits before people.”
Most of us need little convincing that things have gone badly wrong somewhere, and in 2022, after Covid lockdown, the situation seems worse. But as the report of the Democracy and Digital Technologies Committee says, we must look at the roots of the problem and the accountabilities involved. It is all about the power of the algorithm and data, as the noble Lords, Lord Stevenson and Lord Mitchell, said.
We are being targeted with our own data. Online political microtargeting is used to alter how we vote, especially with misinformation. Extreme content is amplified as part of the platform business model. Outrage is encouraged. These business models operate directly against the best interests of a democratic society. The platforms prey on us, in that vivid phrase quoted by the noble Baroness, Lady Kidron. Lord Puttnam made the strong point in his valedictory lecture that 6 January was a wake-up call to tackle the problems with microtargeting and algorithmic bias which underlie the business models of the social media platforms.
Ownership of data is increasingly concentrated in the hands of big internet brands, as we have heard from a number of noble Lords today. Metcalfe’s law of networks has led to enormous and growing power for social media.
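For readers unfamiliar with it, Metcalfe's law holds that a network's value scales with the number of possible pairwise connections among its n users, so value grows roughly with the square of the user count; a sketch of the arithmetic:

```latex
% Metcalfe's law: n users can form n(n-1)/2 distinct pairwise connections,
% so the value V of the network grows on the order of n squared.
V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\sim\; n^{2}
```

Doubling the user base therefore roughly quadruples the network's value, which is why scale compounds so quickly in the hands of the biggest platforms.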
What should the consequences be for social media? How can we prevent these harms to democracy? How can we restore trust—or resurrect it, in the words of the report? The bottom line is that we do have the power, as the noble Lords, Lord Holmes and Lord Stevenson, said. We need government regulation, and quickly. In the phrase used by Avaaz, we need to detoxify the algorithm, not only regarding hate speech, terrorism and cyberbullying but in very clear electoral regulation and action by the Competition and Markets Authority to enforce competition in the tech and data space.
We also need much greater personal control over our data and how it is used. Misinformation and disinformation are particularly hard to define but, as the committee said, if the Government decide that the Online Safety Bill is not the appropriate place to do so, then they should use the Elections Bill, which is currently making its way through Parliament. Tackling societal harms caused by misinformation and disinformation is not straightforward, as our Joint Committee found, but the draft Online Safety Bill, as we described in our report of last December, needs to go further.
There is of course a tension with freedom of expression and, as we emphasised, we must prioritise tackling specific harmful activity over restricting content.
In our Joint Committee report, we recommended safety by design requirements, such as increasing transparency and countering algorithmic power and virality; as Fair Vote says, it is a proven way to preserve free speech, while limiting free reach of content that poses societal harm at scale. For example, we heard that a simple change—introducing more friction into sharing on Facebook—would have the same effect on the spread of misinformation and disinformation as the entire third-party fact checking system.
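To see why a small amount of friction can rival an entire fact-checking operation, consider a toy cascade model; the audience size and reshare rates below are invented numbers for illustration, not figures from the evidence the committee heard.

```python
# Toy model of viral spread with and without a sharing "friction" prompt.
# Assumed numbers, purely for illustration: each share reaches 20 followers,
# and a fraction p of exposed users reshare. Spread per generation scales
# with the branching factor R = 20 * p, so the cascade explodes when R > 1
# and fizzles when R < 1.

FOLLOWERS_PER_SHARE = 20

def expected_exposures(reshare_probability: float, generations: int = 10) -> float:
    """Expected total exposures after a number of resharing generations."""
    branching = FOLLOWERS_PER_SHARE * reshare_probability
    reach = FOLLOWERS_PER_SHARE  # generation 1: the original post's audience
    total = reach
    for _ in range(generations - 1):
        reach *= branching       # each exposed user reshares with probability p
        total += reach
    return total

# Without friction: 6% reshare -> branching factor 1.2 (supercritical)
# With a "read before you share" prompt: 4% reshare -> factor 0.8 (subcritical)
print(f"no friction:   {expected_exposures(0.06):8.0f} expected exposures")
print(f"with friction: {expected_exposures(0.04):8.0f} expected exposures")
```

In this sketch, nudging the reshare rate from 6% to 4% pushes the branching factor below the critical value of 1 and cuts expected exposures from roughly 500 to under 100, which is the intuition behind friction as a content-neutral safety-by-design measure.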
We do not yet know what the Government’s response to these recommendations is—that may come next week—but we do have the Elections Bill in front of us. The real government reluctance is over reform of electoral law and regulation of digital political activity. Apart from the digital imprint provisions, the Bill fails to take any account of the mounting evidence and concerns about the impact of misinformation and disinformation on our democracy. The Government have yet even to act on the Electoral Commission report of June 2018, Digital Campaigning: Increasing Transparency for Voters, which called for urgent reforms to electoral law to combat misinformation, misuse of personal data and overseas interference in elections, amid concerns that British democracy may be under threat. Why are these recommendations not contained in the Elections Bill? We heard in the previous debate today about the flaws in that Bill.
How prescient was the ISC in its Russia report:
“The links of the Russian elite to the UK – especially where this involves business and investment – provide access to UK companies and political figures, and thereby a means for broad Russian influence in the UK. To a certain extent, this cannot be untangled and the priority now must be to mitigate the risk and ensure that, where hostile activity is uncovered, the tools exist to tackle it at source.”
Most recently, the Committee on Standards in Public Life has made a number of other important recommendations regarding digital and social media campaigning.
But, as we have heard today, this is not enough. Regulation by itself will not deal with all the issues. Even though we are facing issues that threaten democracy, we should be trying to preserve the good that the internet has done as we work to mitigate its harm to our political system. So, as well as regulation, there needs to be, as the Democracy and Digital Technologies Committee report says, public engagement to support digital understanding at all levels of society. As several noble Lords said, digital literacy and digital skills are of huge importance, as the committee’s report also emphasised. We must do more than simply expect Ofcom—even under the chairmanship of the noble Lord, Lord Vaizey—to deliver a digital media strategy. This needs a whole-of-government and whole-of-society approach. We are supposed to be the cradle of democracy, yet the EU is way ahead of us in its proposals to regulate political advertising. This needs cross-governmental action and much greater action from the social media platforms themselves.
At the end of the day, however, we need to look in the mirror. We deserve a better system. The Government are playing into the hands of those who wish to erode our democracy by digital means. Why are they intent on reducing the independence of the Electoral Commission? As the noble Lord, Lord Griffiths, said, trust in our democracy has been eroded by this Government—certainly by the negative response so graphically described by the noble Lord, Lord Mitchell. The Government must change tack and provide effective safeguards.
Where should facial recognition be used?
14 February 2022
Interview with
Gareth Mitchell, BBC & Stephanie Hare, Author & Lord Clement-Jones
When we think of our personal data, we often consider information like our phone number, bank details, or email address. But what about our eyes, ears, mouth, and nose? Facial recognition is increasingly being used to tag and track our individual activities, and while commonplace in unlocking personal devices like laptops and phones, certain institutions are keen to use our features for much more than mugshots. This includes the US Treasury, who last week backtracked on plans for mandatory facial verification for people logging their tax returns. So why are some people wary of firms having their faces on file? Robert Spencer finds out more...
Robert - It's a question that appears time and time again. How comfortable are we as a society with facial recognition? As unlocking your phone shows, in some respects the answer is clear, but when it comes to having your face scanned as you walk down the street, the issue becomes more murky.
Gareth - It's a biometric identifier. That means using aspects of your body for identification. The issue is that all of us are walking around in public showing our faces, meaning that anybody with a scanner, if they want to, can mount a camera and use an algorithm to identify us. We don't have any control over who is using our face as the identifier.
Robert - That's Gareth Mitchell who presents Digital Planet on the BBC world service. This lack of control and consent is key to one of the central paradoxes in the discussion around facial recognition. It speaks to the differences in technologies involved as Stephanie Hare explains in her new book, Technology Is Not Neutral: A Short Guide to Technology Ethics.
Stephanie - There are different types of facial recognition technology. So let's start with facial verification. That's the kind that you would use to unlock your own smartphone. That's not a very high risk use of facial recognition technology because the biometric never leaves your phone. A higher risk example is going to be when the police are using live facial recognition technology to identify people in a crowd. This might be high risk because it can have a chilling effect on free speech. If people fear that when they're going to these protests, they're being scanned by the police.
Robert - But it's not just about giving consent and having control of your biometrics. The algorithms themselves are large complex computer programs, often hidden behind company secrets. And it turns out, they aren't always as accurate as we'd like.
Stephanie - It doesn't work as well on people with darker skin. It works particularly poorly on women with darker skin, but it can also be a problem with children, with trans people and with elderly people.
Robert - The fix though might not be as simple as it seems.
Gareth - In order for the algorithms to get better at recognizing a whole diversity of faces, that would mean training those algorithms on more and more faces. And so opponents would say, well, that just adds to the problem. One problem is the algorithms are not very good at identifying a particular group of people. So let's just go and get loads of profiles of these kinds of people and put them into our databases. Well, then you scanned even more faces you've potentially compromised more people's privacy and that's made the problem even worse.
Robert - Police forces around the UK also disagree on the use of the technology known as live facial recognition. The Met uses facial recognition to find offenders on watchlists, but Scottish police have halted its use.
Stephanie - Right now, our experience of this technology, who's using it, and how it's even discussed in law differs depending on your postcode.
Gareth - And another reason why facial ID has been so controversial is that some of these police forces have been rolling it out before there was a regulatory framework in effect to protect us and, if necessary, them.
Robert - This lack of legal framework also concerns Lord Clement-Jones, who debated the issue last week in the House of Lords.
Lord Clement-Jones - And the general conclusion was that there was no single piece of legislation that really covered the use of live facial recognition. It's very easy to say, we need to ban this technology and I'm not quite in that camp. What I want to see, and this was the common ground, is a review. We want to see what basis there should be for legislation, we want to see how the technology performs, and then we want to be able to decide whether we should ban it or, whether there are some uses to which it could be put with the right framework.
Robert - It's hard to ignore the distinct advantages facial recognition carries. It's fast and hands-free. The ability to accurately and instantly identify a fugitive in a crowd would make the world a safer place.
Gareth - There was bound to be a trade-off between our liberties and our security. We should be having conversations that are diverse, where a wide range of people are coming to the table with their views and their issues.
Stephanie - I would want to be hearing from scientists, the people who manufactured this tech, from the military, from the police, from medical professionals, from civil liberties groups. And I think it's the first step on a long journey that we have to have in the United Kingdom.
Robert - Lord Clement-Jones is optimistic.
Lord Clement-Jones - The public ought to take away from this debate that there are a great many parliamentarians concerned about the use of new technology without proper oversight. But they should put pressure on their own MPs to ask, much more seriously, what is happening.
Robert - It's clear then that we need to have this discussion sooner rather than later. In the meantime, though, I'm going to keep using my face to unlock my phone. I'm not sure where the line in the sand is, but for me, it's a bit past this level of convenience.
New Surveillance Code Incompatible with Human Rights
Recently the Government introduced a revised Surveillance Camera Code of Practice which it claims makes the police's use of live facial recognition compliant with the Bridges case. This is my speech on the regret motion I tabled in response, with very helpful support from Liberty.
That this House regrets the Surveillance Camera Code of Practice because (1) it does not constitute a legitimate legal or ethical framework for the police’s use of facial recognition technology, and (2) it is incompatible with human rights requirements surrounding such technology.
My Lords, I have raised the subject of live facial recognition many times in this House and elsewhere, most recently last November, in connection with its deployment in schools. Following an incredibly brief consultation exercise, timed to coincide with the height of the summer holidays last year, the Government laid an updated Surveillance Camera Code of Practice, pursuant to the Protection of Freedoms Act 2012, before both Houses on 16 November last year, which came into effect on 12 January 2022.
The subject matter of this code is of great importance. The last Surveillance Camera Commissioner did a survey shortly before stepping down, and found that there are over 6,000 systems and 80,000 cameras in operation across 183 local authorities. The UK is now the most camera-surveilled country in the western world. According to recently published statistics, London remains the third most surveilled city in the world, with 73 surveillance cameras for every 1,000 people. We are also faced with a rising tide of the use of live facial recognition for surveillance purposes.
Let me briefly give a snapshot of the key arguments why this code is insufficient as a legitimate legal or ethical framework for the police’s use of facial recognition technology and is incompatible with human rights requirements surrounding such technology. The Home Office has explained that changes were made mainly to reflect developments since the code was first published, including changes introduced by legislation such as the Data Protection Act 2018 and those necessitated by the successful appeal of Councillor Ed Bridges in the Court of Appeal judgment on police use of live facial recognition issued in August 2020, which ruled that South Wales Police’s use of AFR—automated facial recognition—had not in fact been in accordance with the law on several grounds, including in relation to certain convention rights, data protection legislation and the public sector equality duty.
During the fifth day in Committee on the Police, Crime, Sentencing and Courts Bill last November, the noble Baroness, Lady Williams of Trafford, the Minister, described those who know about the Bridges case as “geeks”. I am afraid that does not minimise its importance to those who want to see proper regulation of live facial recognition. In particular, the Court of Appeal held in Bridges that South Wales Police’s use of facial recognition constituted an unlawful breach of Article 8—the right to privacy—as it was not in accordance with law. Crucially, the Court of Appeal held that certain bare minimum safeguards were required before the question of lawfulness could even be considered.
The previous surveillance code of practice failed to provide such a basis. This, the updated version, still fails to meet the necessary standards, as the code allows wide discretion to individual police forces to develop their own policies in respect of facial recognition deployments, including the categories of people included on a watch-list and the criteria used to determine when to deploy. There are but four passing references to facial recognition in the code itself. This scant guidance cannot be considered a suitable regulatory framework for the use of facial recognition.
There is, in fact, no reference to facial recognition in the Protection of Freedoms Act 2012 itself or indeed in any other UK statute. There has been no proper democratic scrutiny over the code and there remains no explicit basis for the use of live facial recognition by police forces in the UK. The forthcoming College of Policing guidance will not satisfy that test either.
There are numerous other threats to human rights that the use of facial recognition technology poses. To the extent that it involves indiscriminately scanning, mapping and checking the identity of every person within the camera’s range—using their deeply sensitive biometric data—LFR is an enormous interference with the right to privacy under Article 8 of the ECHR. A “false match” occurs where someone is stopped following a facial recognition match but is not, in fact, the person included on the watch-list. In the event of a false match, a person attempting to go about their everyday life is subject to an invasive stop and may be required to show identification, account for themselves and even be searched under other police powers. These privacy concerns cannot be addressed by simply requiring the police to delete images captured of passers-by or by improving the accuracy of the technology.
The ECHR requires that any interference with the Article 10 right to freedom of expression or the Article 11 right to free association is in accordance with law and both necessary and proportionate. The use of facial recognition technology can be highly intimidating. If we know our faces are being scanned by police and that we are being monitored when using public spaces, we are more likely to change our behaviour and be influenced on where we go and who we choose to associate with.
Article 14 of the ECHR ensures that no one is denied their rights because of their gender, age, race, religion or beliefs, sexual orientation, disability or any other characteristic. Police use of facial recognition gives rise to two distinct discrimination issues: bias inherent in the technology itself and the use of the technology in a discriminatory way.
Liberty has raised concerns regarding the racial and socioeconomic dimensions of police trial deployments thus far—for example, at Notting Hill Carnival for two years running as well as twice in the London Borough of Newham. The disproportionate use of this technology in communities against which it “underperforms”—according to its proponents’ standards—is deeply concerning.
As regards inherent bias, a range of studies have shown facial recognition technology disproportionately misidentifies women and BAME people, meaning that people from these groups are more likely to be wrongly stopped and questioned by police and to have their images retained as the result of a false match.
The Court of Appeal determined that South Wales Police had failed to meet its public sector equality duty, which requires public bodies and others carrying out public functions to have due regard to the need to eliminate discrimination. The revised code not only fails to provide any practical guidance on the public sector equality duty but, given the inherent bias within facial recognition technology, also fails to emphasise the rigorous analysis and testing that the duty requires.
The code itself covers nobody other than the police and local authorities. It does not extend, in particular, to Transport for London, central government or private users, where there have also been concerning developments in the use of police data. For example, it was revealed that the Trafford Centre in Manchester scanned the faces of every visitor—approximately 15 million people—over a six-month period in 2018, using watch-lists provided by Greater Manchester Police. LFR was also used at the privately owned but publicly accessible site around King’s Cross station. Both the Met and British Transport Police had provided images for their use, despite originally denying doing so.
It is clear from the current and potential future human rights impact of facial recognition that this technology has no place on our streets. In a recent opinion, the former Information Commissioner took the view that South Wales Police had not ensured that a fair balance had been struck between the strict necessity of the processing of sensitive data and the rights of individuals.
The breadth of public concern around this issue is growing clearer by the day. Several major cities in the US have banned the use of facial recognition and the European Parliament has called for a ban on police use of facial recognition technology in public places and predictive policing. In response to the Black Lives Matter uprisings in 2020, Microsoft, IBM and Amazon announced that they would cease selling facial recognition technology to US law enforcement bodies. Facebook, aka Meta, also recently announced that it will be shutting down its facial recognition system and deleting the “face prints” of more than a billion people after concerns were raised about the technology.
In summary, it is clear that the Surveillance Camera Code of Practice is an entirely unsuitable framework to address the serious rights risk posed by the use of live facial recognition in public spaces in the UK. As I said in November in the debate on facial recognition technology in schools, the expansion of such tools is a “short cut to a widespread surveillance state.”—[Official Report, 4/11/21; col. 1404.]
Public trust is crucial. As the Biometrics and Surveillance Camera Commissioner said in a recent blog: “What we talk about in the end, is how people will need to be able to have trust and confidence in the whole ecosystem of biometrics and surveillance”.
I have on previous occasions, not least through a Private Member’s Bill, called for a moratorium on the use of LFR. In July 2019, the House of Commons Science and Technology Committee published a report entitled The Work of the Biometrics Commissioner and the Forensic Science Regulator. It repeated a call made in an earlier 2018 report that “automatic facial recognition should not be deployed until concerns over the technology’s effectiveness and potential bias have been fully resolved.” The much-respected Ada Lovelace Institute has also called for “a voluntary moratorium by all those selling and using facial recognition technology”, which would “enable a more informed conversation with the public about limitations and appropriate safeguards.”
Rather than update toothless codes of practice to legitimise the use of new technologies like live facial recognition, the UK should have a root and branch surveillance camera review which seeks to increase accountability and protect fundamental rights. The review should investigate the novel rights impacts of these technologies, the scale of surveillance we live under and the regulations and interventions needed to uphold our rights.
We were reminded by the leader of the Opposition on Monday about what Margaret Thatcher said, and I also said this to the Minister earlier this week:
“The first duty of Government is to uphold the law. If it tries to bob and weave and duck around that duty when it’s inconvenient, if Government does that, then so will the governed and then nothing is safe—not home, not liberty, not life itself.”
It is as apposite for this debate as it was for that debate on the immigration data exemption. Is not the Home Office bobbing and weaving and ducking precisely as described by the late Lady Thatcher?
The Road to Trustworthy Use of Healthcare Data: Good Governance and a Sovereign Health Fund
I recently wrote a guest blog for Future Care Capital on data in the Health and Care Bill:
https://futurecarecapital.org.uk/latest/guest-blog-lord-clement-jones-3/
The Health and Care Bill currently passing through Parliament contains potentially major changes to the way that our public health data will be treated, with the merging of NHS Digital and NHSX into NHS England. Important amendments are needed.
All of us recognise the benefits of using the health data which arises in the course of treating patients in the NHS, both for research that will lead to new and improved treatments for disease and for the purposes of public health and health services planning. It has in particular been of great benefit in helping to improve the treatment of COVID during the pandemic.
The introduction of Shared Care Records is a key part of this revolution. These allow staff involved in a person’s care to access health and care records to provide better joined-up care across different parts of the health and social care system.
But increasingly the Government and, I am sad to say, agencies such as NHS Digital and NHSX seem to think that they can share patient data with private companies with barely a nod to patient consent and proper principles of data protection.
We can go back to December 2019 and the discovery by Privacy International that the Department of Health and Social Care had agreed to give Amazon free access to NHS England health data, allowing it to develop, advertise and sell new products, applications, cloud-based services and/or distributed software.
Take the situation last year, when we saw what has been described as the biggest data grab in the history of the health service: the collection of GP patient data. In May, NHS Digital, with minimal consultation, explanation or publicity, and without publishing any data protection impact assessment (DPIA), announced its plans to share patients’ primary health care data collected by GP practices, giving patients just six weeks to opt out.
As a result of campaigners’ efforts, including a group of Tower Hamlets GPs who refused to hand over patient data, Ministers first announced that implementation would be delayed until 1 September and then, by a letter to GPs in July, put the whole scheme on hold, including data collection.
As a result of this bungled approach more than a million people have now opted out of NHS data-sharing.
The Government have had to revise their approach: they have had to devise a simpler opt-out system, commit to the publication of a data protection impact assessment before data collection starts again, commit that access to GP data will only be via a Trusted Research Environment (TRE), and commit to a properly thought-through engagement and communications strategy.
But if we are to retain and build trust in the use of health data, we need a new governance framework.
The Government must gain society’s trust through honesty, transparency and rigorous safeguards. The individual must have the right to choose whether to share their data or not and understand how it will be used.
We need to retain NHS Digital’s statutory safe haven functions separate from NHS England, and all health data must be held anonymously and accessed through an accredited data access environment, designed to cover not only the promised TRE but also data used for planning purposes.
The data held by the NHS must be considered as a unique source of value held for national benefit. Retaining control over our publicly generated data, particularly health data, for planning, research and innovation is vital if the UK is to maintain its position as a leading life science economy and innovator.
We need a guarantee that our health data will be used in an ethical manner, assigned its true value and used for the benefit of UK healthcare. Any proceeds from data collaborations that the Government agrees to, integral to any replacement or new trade deals, should be ring-fenced for reinvestment in the health and care system with a Sovereign Health Fund.
Those, I believe, are the right foundations for health data governance and, alongside other members of the Lords such as Lord Hunt of Kings Heath and Baroness Cumberlege, both with enormous experience of the health service, I will be supporting and tabling amendments during the passage of the Bill to secure them.