New Surveillance Code Incompatible with Human Rights
Recently the Government introduced a revised Surveillance Camera Code of Practice which it claims makes the police's use of live facial recognition compliant with the Bridges case. This is my speech on the regret motion I tabled in response, with very helpful support from Liberty.
That this House regrets the Surveillance Camera Code of Practice because (1) it does not constitute a legitimate legal or ethical framework for the police’s use of facial recognition technology, and (2) it is incompatible with human rights requirements surrounding such technology.
My Lords, I have raised the subject of live facial recognition many times in this House and elsewhere, most recently last November, in connection with its deployment in schools. Following an incredibly brief consultation exercise, timed to coincide with the height of the summer holidays last year, the Government laid an updated Surveillance Camera Code of Practice, pursuant to the Protection of Freedoms Act 2012, before both Houses on 16 November last year; it came into effect on 12 January 2022.
The subject matter of this code is of great importance. The last Surveillance Camera Commissioner did a survey shortly before stepping down, and found that there are over 6,000 systems and 80,000 cameras in operation across 183 local authorities. The UK is now the most camera-surveilled country in the western world. According to recently published statistics, London remains the third most surveilled city in the world, with 73 surveillance cameras for every 1,000 people. We are also faced with a rising tide of the use of live facial recognition for surveillance purposes.
Let me briefly give a snapshot of the key arguments why this code is insufficient as a legitimate legal or ethical framework for the police’s use of facial recognition technology and is incompatible with human rights requirements surrounding such technology. The Home Office has explained that changes were made mainly to reflect developments since the code was first published, including changes introduced by legislation such as the Data Protection Act 2018 and those necessitated by the successful appeal of Councillor Ed Bridges in the Court of Appeal judgment on police use of live facial recognition issued in August 2020. That judgment ruled that South Wales Police’s use of AFR—automated facial recognition—had not in fact been in accordance with the law on several grounds, including in relation to certain convention rights, data protection legislation and the public sector equality duty.
During the fifth day in Committee on the Police, Crime, Sentencing and Courts Bill last November, the noble Baroness, Lady Williams of Trafford, the Minister, described those who know about the Bridges case as “geeks”. I am afraid that does not minimise its importance to those who want to see proper regulation of live facial recognition. In particular, the Court of Appeal held in Bridges that South Wales Police’s use of facial recognition constituted an unlawful breach of Article 8—the right to privacy—as it was not in accordance with law. Crucially, the Court of Appeal demanded that certain bare minimum safeguards were required for the question of lawfulness to even be considered.
The previous surveillance code of practice failed to provide such a basis. This, the updated version, still fails to meet the necessary standards, as the code allows wide discretion to individual police forces to develop their own policies in respect of facial recognition deployments, including the categories of people included on a watch-list and the criteria used to determine when to deploy. There are but four passing references to facial recognition in the code itself. This scant guidance cannot be considered a suitable regulatory framework for the use of facial recognition.
There is, in fact, no reference to facial recognition in the Protection of Freedoms Act 2012 itself or indeed in any other UK statute. There has been no proper democratic scrutiny over the code and there remains no explicit basis for the use of live facial recognition by police forces in the UK. The forthcoming College of Policing guidance will not satisfy that test either.
There are numerous other threats to human rights that the use of facial recognition technology poses. To the extent that it involves indiscriminately scanning, mapping and checking the identity of every person within the camera’s range—using their deeply sensitive biometric data—LFR is an enormous interference with the right to privacy under Article 8 of the ECHR. A “false match” occurs where someone is stopped following a facial recognition match but is not, in fact, the person included on the watch-list. In the event of a false match, a person attempting to go about their everyday life is subject to an invasive stop and may be required to show identification, account for themselves and even be searched under other police powers. These privacy concerns cannot be addressed by simply requiring the police to delete images captured of passers-by or by improving the accuracy of the technology.
The ECHR requires that any interference with the Article 10 right to freedom of expression or the Article 11 right to free association is in accordance with law and both necessary and proportionate. The use of facial recognition technology can be highly intimidating. If we know our faces are being scanned by police and that we are being monitored when using public spaces, we are more likely to change our behaviour and be influenced on where we go and who we choose to associate with.
Article 14 of the ECHR ensures that no one is denied their rights because of their gender, age, race, religion or beliefs, sexual orientation, disability or any other characteristic. Police use of facial recognition gives rise to two distinct discrimination issues: bias inherent in the technology itself and the use of the technology in a discriminatory way.
Liberty has raised concerns regarding the racial and socioeconomic dimensions of police trial deployments thus far—for example, at Notting Hill Carnival for two years running as well as twice in the London Borough of Newham. The disproportionate use of this technology in communities against which it “underperforms”—according to its proponents’ standards—is deeply concerning.
As regards inherent bias, a range of studies have shown facial recognition technology disproportionately misidentifies women and BAME people, meaning that people from these groups are more likely to be wrongly stopped and questioned by police and to have their images retained as the result of a false match.
The Court of Appeal determined that South Wales Police had failed to meet its public sector equality duty, which requires public bodies and others carrying out public functions to have due regard to the need to eliminate discrimination. The revised code not only fails to provide any practical guidance on the public sector equality duty but, given the inherent bias within facial recognition technology, it also fails to emphasise the rigorous analysis and testing required by the public sector equality duty.
The code itself covers nobody other than the police and local authorities. It does not cover, in particular, Transport for London, central government or private users, where there have also been concerning developments in their use of police data. For example, it was revealed that the Trafford Centre in Manchester scanned the faces of every visitor—approximately 15 million people—over a six-month period in 2018, using watch-lists provided by Greater Manchester Police. LFR was also used at the privately owned but publicly accessible site around King’s Cross station. Both the Met and British Transport Police had provided images for their use, despite originally denying doing so.
It is clear from the current and potential future human rights impact of facial recognition that this technology has no place on our streets. In a recent opinion, the former Information Commissioner took the view that South Wales Police had not ensured that a fair balance had been struck between the strict necessity of the processing of sensitive data and the rights of individuals.
The breadth of public concern around this issue is growing clearer by the day. Several major cities in the US have banned the use of facial recognition and the European Parliament has called for a ban on police use of facial recognition technology in public places and predictive policing. In response to the Black Lives Matter uprisings in 2020, Microsoft, IBM and Amazon announced that they would cease selling facial recognition technology to US law enforcement bodies. Facebook, aka Meta, also recently announced that it will be shutting down its facial recognition system and deleting the “face prints” of more than a billion people after concerns were raised about the technology.
In summary, it is clear that the Surveillance Camera Code of Practice is an entirely unsuitable framework to address the serious rights risk posed by the use of live facial recognition in public spaces in the UK. As I said in November in the debate on facial recognition technology in schools, the expansion of such tools is a “short cut to a widespread surveillance state.”—[Official Report, 4/11/21; col. 1404.]
Public trust is crucial. As the Biometrics and Surveillance Camera Commissioner said in a recent blog: “What we talk about in the end, is how people will need to be able to have trust and confidence in the whole ecosystem of biometrics and surveillance”.
I have on previous occasions, not least through a Private Member’s Bill, called for a moratorium on the use of LFR. In July 2019, the House of Commons Science and Technology Committee published a report entitled The Work of the Biometrics Commissioner and the Forensic Science Regulator. It repeated a call made in an earlier 2018 report that “automatic facial recognition should not be deployed until concerns over the technology’s effectiveness and potential bias have been fully resolved.” The much-respected Ada Lovelace Institute has also called for “a voluntary moratorium by all those selling and using facial recognition technology”, which would “enable a more informed conversation with the public about limitations and appropriate safeguards.”
Rather than update toothless codes of practice to legitimise the use of new technologies like live facial recognition, the UK should have a root and branch surveillance camera review which seeks to increase accountability and protect fundamental rights. The review should investigate the novel rights impacts of these technologies, the scale of surveillance we live under and the regulations and interventions needed to uphold our rights.
We were reminded by the leader of the Opposition on Monday about what Margaret Thatcher said, and I also said this to the Minister earlier this week:
“The first duty of Government is to uphold the law. If it tries to bob and weave and duck around that duty when it’s inconvenient, if Government does that, then so will the governed and then nothing is safe—not home, not liberty, not life itself.”
It is as apposite for this debate as it was for that debate on the immigration data exemption. Is not the Home Office bobbing and weaving and ducking precisely as described by the late Lady Thatcher?
The Road to Trustworthy Use of Healthcare Data: Good Governance and a Sovereign Health Fund
I recently wrote a guest blog for Future Care Capital on data in the Health and Care Bill:
https://futurecarecapital.org.uk/latest/guest-blog-lord-clement-jones-3/
The Health and Care Bill currently passing through Parliament contains potentially major changes to the way our public health data will be treated, with the merger of NHS Digital and NHSX into NHS England. Important amendments are needed.
All of us recognise the benefits of using health data which arises in the course of treating patients in the NHS for research that will lead to new and improved treatments for disease, and for the purposes of public health and health services planning. It has in particular been of great benefit in helping to improve the treatment of COVID during the pandemic.
The introduction of Shared Care Records is a key part of this revolution. These allow staff involved in a person’s care to access health and care records, providing better joined-up care across different parts of the health and social care system.
But increasingly the Government and, I am sad to say, agencies such as NHS Digital and NHSX seem to think that they can share patient data with private companies with barely a nod to patient consent and proper principles of data protection.
We can go back to December 2019 and the discovery by Privacy International that the Department of Health and Social Care had agreed to give Amazon free access to NHS England health data, allowing it to develop, advertise and sell new products, applications, cloud-based services and/or distributed software.
Take the situation last year, when we had what has been described as the biggest data grab in the history of the health service: the collection of GP patient data. In May, NHS Digital, with minimal consultation, explanation or publicity, and without publishing any data protection impact assessment (DPIA), announced its plans to share patients’ primary health care data collected by GP practices, giving patients just six weeks to opt out.
As a result of campaigners’ efforts, including a group of Tower Hamlets GPs who refused to hand over patient data, Ministers first announced that implementation would be delayed until 1 September and then, by letter to GPs in July, put the whole scheme on hold, including data collection.
As a result of this bungled approach more than a million people have now opted out of NHS data-sharing.
The Government have had to revise their approach: devising a simpler opt-out system, committing to the publication of a data protection impact assessment before data collection starts again, committing that access to GP data will only be via a Trusted Research Environment (TRE), and committing to a properly thought-through engagement and communications strategy.
But if we are to retain and build trust in the use of health data, we need a new governance framework.
The Government must gain society’s trust through honesty, transparency and rigorous safeguards. The individual must have the right to choose whether to share their data or not and understand how it will be used.
We need to retain NHS Digital’s statutory safe haven functions separately from NHS England, and all health data must be held anonymously and accessed through an accredited data access environment, designed to cover not only the promised TRE but also data used for planning purposes.
The data held by the NHS must be considered as a unique source of value held for national benefit. Retaining control over our publicly generated data, particularly health data, for planning, research and innovation is vital if the UK is to maintain its position as a leading life science economy and innovator.
We need a guarantee that our health data will be used in an ethical manner, assigned its true value and used for the benefit of UK healthcare. Any proceeds from data collaborations that the Government agrees to, integral to any replacement or new trade deals, should be ring-fenced for reinvestment in the health and care system with a Sovereign Health Fund.
Those, I believe, are the right foundations for health data governance and, alongside other members of the Lords such as Lord Hunt of Kings Heath and Baroness Cumberlege, both with enormous experience of the health service, I will be supporting and tabling amendments during the passage of the Bill to secure them.
Artificial Intelligence and Intellectual Property: incentivise human innovation and creation
Christian Gordon-Pullar and I recently responded to the Government's Consultation Paper on Artificial Intelligence and Intellectual Property: Copyright and Patents.
This is what we said:
As Artificial Intelligence (AI) becomes embedded in people’s lives, the United Kingdom (UK) is at a pivotal inflection point. The UK’s National AI Strategy rightly recognises Artificial Intelligence (AI) as the ‘fastest growing deep technology in the world, with huge potential to rewrite the rules of entire industries, drive substantial economic growth and transform all areas of life’ and estimates that AI could deliver a 10% increase in UK GDP in 2030.
The UK is, potentially, well-positioned to be a world-leader in AI, over time, as a genuine research and innovation powerhouse, a hub for global talent and a progressive regulatory and business environment. Achieving this will involve attracting, retaining and incentivising business to create, protect and locate investment efforts in the UK. The UK has the potential to gain impetus from a position of strength in AI research, enterprise and ethical regulation, and, with its recent history of support for AI, it stands among the best in the world. To attract talent, incentivise investment in AI-powered or AI-focused innovation, influence global markets and shape global governance, the nature of the Intellectual Property regime in the UK relating to AI will be crucial.
Specifically in relation to the three headline areas of focus in the Consultation Paper:
1. Copyright: Computer Generated Works
The UK is one of only a handful of countries to protect works generated by a computer where there is no human creator. The “author” of a “computer-generated work” (CGW) is defined as “the person by whom the arrangements necessary for the creation of the work are undertaken”. Protection lasts for 50 years from the date the work is made.
On the same basis, the owner of the literary work, and of the copyright subsisting in it if it were original, would alternatively be:
- a) the operator of an AI system (aligning its inputs and selecting its datasets and data fields); or
- b) their employer, if employed; or
- c) a third party, if the operator has a contract assigning such rights outside of an employment context.
To be original, a work must be an author’s or artist’s own intellectual creation, reflecting their personality (see the decisions of the EU Court of Justice in Infopaq, C-5/08, and Painer, C-145/10).
At the other end of the scale, a human who simply provides training data to an AI system and presses “analyse” is unlikely to be considered the author of the resulting work.
In this way, we believe that the existing copyright legislative framework under the CDPA adequately addresses the current needs of AI developers. New entrants and disruptors can, in our opinion, work within the existing framework, which adequately caters for existing and foreseeable future needs.
Indeed, realistic hypothetical future scenarios may well involve an AI system having access to content from global providers and creating derivative content (whether under licence or not) at great speed, with little or no investment or “sweat of the brow”. It can therefore be argued that the level of protection should be reduced to be proportionate to the time, effort and investment involved.
Further, we would also urge that copyright law be clarified to ensure that it is the operator of the AI system (or his or her employer), that is, the person who guides the AI system to apply certain data or parameters and shapes the outcome, who is the copyright owner, and not the owner of the AI system.
One can see a future scenario where “AI-as-a-Service” is offered, whereby a content user or hirer of the AI system is allowed to apply their own rules, parameters and data/inputs to a problem whilst ‘hiring’ or using the AI system as a service (just as SaaS exists today). The operator of the AI system (not the owner of the AI system) should in that case be the first owner of the copyright in the resulting work (subject to contractual rights that may be transferred, licensed or otherwise assigned thereafter).
Ranking Options in order:
- We would therefore urge the IPO to choose Option 2: a lesser term of copyright protection, e.g. 5-15 years, for AI-generated copyright works (e.g. music, art etc.) which, as described above, require little investment or “sweat of the brow”.
- Failing that, we would urge the IPO to choose Option 0: make no legal change.
- Option 1, removing the protection, is not a viable or desirable option in our opinion.
2. Copyright: Text and Data Mining
The Government rightly believe that there is a need to promote and further enable AI development. This must however be balanced with a commensurate and proportionate recognition of the critical importance and value of data as raw material.
AI developers rely on high-quality data to develop reliable and innovative AI-driven inventions and applications. Licensing regimes under existing IP law are designed to cater for the needs of AI developers.
By the same token, content and data-driven businesses have themselves seen a rapid increase in the use of AI technology and machine learning, whether for news summaries, data gathering, translations for research and journalistic purposes, or to help organisations save time by processing large amounts of text and other data at scale and speed. Digital technologies, including AI, are and will continue to be of critical importance to these industries, helping create content, new products and value-added services to deliver to a broad range of corporate and retail clients. Whether in news media or cross-industry research, publishers are themselves investing in AI; continued collaboration with start-ups and academia is creating tailored materials for wide populations of beneficiaries (students, academia, research organisations, and even marketers of consumer publishing products).
It is of paramount importance to balance the needs of future AI development with the legal, commercial and economic rights of data owners, and the need to incentivise new AI adoption with recognition of the rights of existing content owners.
We have, however, seen no evidence that the existing copyright legislative framework fails to adequately address the current needs of AI developers. Moreover, it is particularly important, in our view, to ensure that the development of AI is not enabled at the expense of the underlying investment by copyright and data owners (see endnote 1).
If the content owners of underlying data materials withhold the licensing of, or access to, such materials or attempt to price them at a level that is unfair, the answer is for Government via the Competition and Markets Authority/the new Digital Markets Unit (or indeed other regulators who form part of the Digital Regulation Cooperation Forum) to put in place competition measures to ensure there is a clear legal recourse in such situations.
In summary, we do not believe that current copyright law creates a disparity between the interests of AI developers and investors on the one hand and content owners on the other. The existing copyright regime under the CDPA reflects a balance that fairly protects those investing in data creation without giving an unfair advantage to technology companies offering AI-enabled content creation services. In particular, the current framework provides a balanced regime for text and data mining, and we believe no changes are required at present. However, we recommend a watching brief: the IPO should consider and take account of changes to copyright laws in other countries that may make it more attractive for AI operators to base their operations in those locations, so that text and data mining activities, machine learning, etc. become more easily performed elsewhere or are permitted with incentives not offered in the UK.
Ranking Options in order:
- We would therefore urge the IPO to elect Option 0 – Make no legal change. No other option is currently justifiable given the lack of evidence of an adverse commercial environment preventing access to data or text by AI-enabled content creators. Should the Government or IPO consider that there needs to be increased access to data at lower cost, it should look at other policy levers to stimulate such uptake, such as providing tax incentives for content owners to license content, rather than reducing copyright protection.
- We also concur with industry leads who consider that forcing rightsholders to opt in to protection, as suggested in Option 3, would be complicated and costly for the many businesses and industries which own literally millions of works, when licensing is far simpler, and would be against the spirit of international treaties on copyright.
3. Patents:
If UK patents were to protect AI-devised inventions, how should the inventor be identified, and who should be the patent owner? What effects does this have on incentivising and rewarding AI-devised inventions?
As we described above, the author and first owner of any AI-assisted or AI-created work will be the person who creates the work, or their employer if that person is an employee, or a third party if the operator has a contract assigning such rights outside of an employment context.
As the emphasis in copyright law suggests, creating a ‘work’ is in essence a human activity. This is given additional support by the reference to the automatic transfer of copyright from employee to employer; an AI system cannot be said to be an employee.
Similar principles in our view apply to patents as with copyright. For patentability the applicant inventor must be a ‘person’.
Authoritative guidance on how AI-created inventions fit into this scheme, where no human inventor is mentioned, is given in the decision in Thaler v Comptroller General of Patents Trade Marks and Designs (aka ‘Thaler’ or the ‘DABUS case’), and in particular, in our view, in the statements by Lord Justice Birss (L.J. Birss) in his dissenting opinion (see paragraphs 8, 58 and 78 et seq. of the DABUS case, and the Conclusion).
In summary, L.J. Birss set out his views on the lower courts’ erroneous interpretations of the law and in conclusion stated:
- The inventor of an invention under the 1977 Act is the person who actually devised the invention.
- Dr Thaler has complied with his obligations under s13(2) of the 1977 Act because he has given a statement identifying the person(s) he believes the inventor to be (s13(2)(a)) and indicating the derivation of his right to be granted the patent (s13(2)(b)).
- It is no part of the Comptroller's functions under the 1977 Act to deem the applications as withdrawn simply because the applicant's statement under s13(2)(a) does not identify any person who is the inventor. Since the statement honestly reflects the applicant's belief, it satisfies s13(2)(a).
- It is no part of the Comptroller's functions under the 1977 Act to in any way be satisfied that the applicant's claim to the right to be granted the patent is good. In granting a patent to an applicant the Comptroller is not ratifying the applicant's claim to derivation. Dr Thaler's asserted claim, if correct, would mean he was entitled to the grant. Therefore the statement satisfies s13(2)(b).
- The fact that the creator of the inventions in this case was a machine is no impediment to patents being granted to this applicant.
All three judges in Thaler agreed that under the Patents Act (PA) 1977 an inventor must be a person, and as a machine is not a person it cannot, therefore, be an "inventor" for the purposes of section 7(2) of the Act. L.J. Birss, however, dissented on the crucial point of whether it was an impediment to the grant of an application that the creator of an invention was a machine as such. He stated that it was simply that a machine inventor cannot be treated as an inventor for the purpose of granting the application.
In Australia, the court has taken a slightly different view, but there the law is different. As L.J. Birss remarked in his judgment in Thaler:
"After the hearing the appellant sent the court a copy of the judgment of Beach J of 30th July 2021 in the Federal Court of Australia, Thaler v Commissioner of Patents [2021] FCA 879. The judgment deals with another parallel case about applications for the same inventions. Beach J decided the case in Dr Thaler's favour. However yet again the relevant legislation is quite distinct from that in the UK. The applications reached the Australian Patent Office via the Patent Cooperation Treaty (PCT), which meant that a local rule (reg 3.2C(2)(aa)) applied which requires the applicant to provide the name of the inventor. That rule is in different terms from s13(2) and the present case is not a PCT application (i.e. in Australia the name of the inventor must be provided, unlike under UK legislation). If it were then the operation of s13(2) would be affected by a deeming provision (s89B(1)(c)) which we do not have to consider."
We believe that in principle LJ Birss is correct and that the patentability of inventions created by AI, or with the assistance of AI, has been established, provided the basic criteria under the relevant legislation are met. There is therefore no need for the patent system to identify AI as the inventor or to create entirely new rights.
If the IPO takes the view, or it is established on appeal, that the law has not been correctly expressed by LJ Birss, it should be clarified to accord with his judgment. Failing that, for instance if AI systems themselves are treated as inventors, in our view the system of innovation and inventorship in the UK will be eroded, the benefits and incentives for human inventors will be reduced, and ultimately firms could invest more in AI systems than in human innovation.
Without changes in taxes on AI-inventorship and commensurate incentives to balance the negative impact, such a change would be detrimental to the ethos of the patent system and its focus on “a person” being the inventor mentioned in a patent application.
Whilst it is unclear at this stage exactly what the future regulation of AI and associated IP rights will look like in the UK, it is clear that an internationally harmonised approach to the protection and recognition accorded to AI-generated inventions would be desirable.
It is also, in our view, right in principle, to cite L.J. Birss, that ‘there is no rule of law that a new intangible produced by existing tangible property is the property of the owner of the tangible property’, as Dr Thaler contended, and certainly ‘no rule that the property contemplated by section 7(2)(b) in an invention created by a machine is owned by the owner of the machine. Accordingly, the hearing officer and the judge were correct to hold that Dr Thaler is not entitled to apply for patents in respect of the inventions given the premise that DABUS made the inventions’.
In our view, as with AI creations for copyright purposes, the key is the operation and control of the machine/AI producing the invention not ownership of the AI itself.
Ranking Options in order:
- We would therefore urge the IPO to elect Option 1, whereby it is clarified that “inventor” includes a human responsible for the inventive activity of the AI system that leads to the invention or which devises inventions (e.g. where that human operator selects or guides the AI with relevant data, parameters, data-sets or programming logic for the AI’s function or purpose, which leads it to create an invention). This would also cater for the analogous scenario (to that mentioned above under 1) where AI becomes prevalent in the first instance as “AI-as-a-Service”, whereupon there should be a presumption of ownership by the AI operator (not the AI-system owner) and where transfers of ownership and rights can be addressed contractually at the point of use where AI is used ‘…as-a-service’.
- As a second-best option, as requested, particularly if the opinion of LJ Birss is subsequently confirmed by the Supreme Court, we would advocate Option 0: no change.
Endnotes
- Reference: Authors Guild v. Google 721 F.3d 132 (2d Cir. 2015), a copyright case heard in the United States District Court for the Southern District of New York and, on appeal, in the United States Court of Appeals for the Second Circuit between 2005 and 2015. The case concerned fair use in copyright law and the transformation of printed copyrighted books into an online searchable database through scanning and digitisation. It centred on the legality of the Google Book Search (originally named Google Print) Library Partner project that had been launched in 2003. Though there was general agreement that Google's attempt to digitise books through scanning and computer-aided recognition for online searching was a transformative step for libraries, many authors and publishers had expressed concern that Google had not sought their permission to make scans of books still under copyright and offer them to users.
- Two separate lawsuits, one from three authors represented by the Authors Guild and another by the Association of American Publishers, were filed in 2005 charging Google with copyright infringement. Google worked with the litigants in both suits to develop a settlement agreement (the Google Book Search Settlement Agreement) that would have allowed it to continue the programme while paying out for works it had previously scanned, creating a revenue programme for future books that were part of the search engine, and allowing authors and publishers to opt out. The settlement received much criticism, as it also applied to all books worldwide, included works that may have been out of print but still under copyright, and raised antitrust concerns given Google's dominant position within the Internet industry. A reworked proposal to address some of these concerns was met with similar criticism, and ultimately the settlement was rejected in 2011, allowing the two lawsuits to be joined for a combined trial. In late 2013, after the class action status was challenged, the District Court granted summary judgment in favour of Google, dismissing the lawsuit and affirming that the Google Books project met all legal requirements for fair use. The Second Circuit Court of Appeals upheld the District Court's summary judgment in October 2015, ruling that Google's "project provides a public service without violating intellectual property law." The U.S. Supreme Court subsequently denied a petition to hear the case.
A big thank you to Christian for all his hard work on this response.
Lord C-J: Protect Pure Maths
During the Report Stage of the Advanced Research and Invention Agency Bill I spoke in favour of changes to the bill to ensure that pure maths research was included in the definition of scientific research.
This is the recording
https://twitter.com/i/status/1470883981973463049
And this is what I said:
My Lords, I have signed and I support Amendments 12, 13 and 14. As someone immersed in issues relating to AI, machine learning and the application of algorithms to decision-making over the years, I, too, support Protect Pure Maths in its campaign to protect pure maths and advance the mathematical sciences in the UK—and these amendments, tabled by the noble and gallant Lord, Lord Craig, reflect that.
The campaign points out that pure maths has been a great British success story, with Alan Turing, Andrew Wiles and Roger Penrose, the Nobel Prize winner—and, of course, more recently Hannah Fry has popularised mathematics. Stephen Hawking was a great exemplar, too. However, despite its value to society, maths does not always receive the funding and support that it warrants. Giving new funding to AI, for instance, risks overlooking the fundamental importance of maths to technology.
As Protect Pure Maths says, the 2004 BEIS guidelines on research and development, updated in 2010, currently limit the definition of science and research and development for tax purposes to the systematic study of the nature and behaviour of the physical and material universe. We should ensure that the ARIA Bill does not make the same mistake, and that the focus and capacity of the Bill’s provisions also explicitly include the mathematical sciences, including pure maths. Maths needs to be explicitly included as a part of scientific knowledge and research, and I very much hope that the Government accept these amendments.
Lord C-J helps to launch Rolls-Royce Aletheia Framework version 2
The Aletheia Framework is a practical one-page toolkit that guides developers, executives and boards both prior to deploying an AI and during its use. A second version has been developed by Caroline Gorski and her team at R2 Data Labs, Rolls-Royce, to be applicable across a wide range of sectors.
This is how they describe it:
"It asks them to consider 32 facets of social impact, governance and trust and transparency and to provide evidence which can then be used to engage with approvers, stakeholders or auditors.
A new module added in December 2021 is a tried and tested way to identify and help mitigate the risk of bias in training data and AIs. This complements the existing five-step continuous automated checking process, which, if comprehensively applied, tracks the decisions the AI is making to detect bias in service or malfunction and allow human intervention to control and correct it."
I commented on the original version of The Aletheia Framework, and it deals with many of the same areas in education as it does for Rolls-Royce in manufacturing: ethics, impact, compliance, data protection. So I saw an equivalence there, and the Institute for AI Ethics in Education adapted The Aletheia Framework for its needs.
Here are the two videos I made with Rolls-Royce to mark the new version:
First, on why practical ethics matters right now to build public trust:
https://www.rolls-royce.com/sustainability/ethics-and-compliance/the-aletheia-framework.aspx
Second, on how we adapted the Aletheia Framework for education:
https://www.lordclementjones.org/wp-content/uploads/2021/12/Education-case-study.mp4
Launch of AI Landscape Overview: Lord C-J on AI Regulation
It was good to launch the new Artificial Intelligence Industry in the UK Landscape Overview 2021: Companies, Investors, Influencers and Trends with the authors from Deep Knowledge Analytics and Big Innovation Centre, my APPG AI co-chair Stephen Metcalfe MP, Professor Stuart Russell, the Reith Lecturer, Charles Kerrigan of CMS and Dr Scott Steedman of the BSI.
Here is the full report online
https://mindmaps.innovationeye.com/reports/ai-in-uk
And here is what I said about AI Regulation at the launch:
A little under five years ago we started work on the AI Select Committee inquiry that led to our report AI in the UK: Ready, Willing and Able? The Hall/Pesenti Review of 2017 came at around the same time.
Since then many great institutions have played a positive role in the development of ethical AI. Some are newish, like the Centre for Data Ethics and Innovation, the AI Council and the Office for AI; others are established regulators such as the ICO, Ofcom, the Financial Conduct Authority and the CMA, which have put together a new Digital Regulation Cooperation Forum to pool expertise in this field. Their role includes sandboxing, with input on areas such as risk assessment, audit, data trusts and standards from a variety of expert institutes such as the Turing, the Open Data Institute, the Ada Lovelace and the OII, as well as the British Standards Institution. Our Intellectual Property Office too is currently grappling with issues relating to IP created by AI.
The publication of the National AI Strategy this autumn is a good time to take stock of where we are heading on regulation. We need to be clear above all, as organisations such as techUK are, that regulation is not the enemy of innovation; it can in fact be the stimulus, and the key to gaining and retaining public trust around AI and its adoption, so we can realise the benefits and minimise the risks.
I have personally just completed a very intense examination of the Government’s proposals on online safety, where many of the concerns derive from the power of the algorithm in targeting messages and amplifying them. The essence of our recommendations revolves around safety by design and risk assessment.
As is evident from the work internationally by the Council of Europe, the OECD, UNESCO, the Global Partnership on AI and the EU with its proposal for an AI Act, in the UK we need to move forward with proposals for a risk based regulatory framework which I hope will be contained in the forthcoming AI Governance White Paper.
Some of the signs are good. The National AI Strategy accepts that we need to prepare for AGI, and it also talks about:
- public trust and the need for trustworthy AI,
- that Government should set an example,
- the need for international standards and an ecosystem of AI assurance tools
and in fact the Government have recently produced a set of Transparency Standards for AI in the public sector.
On the other hand:
- Despite little appetite in the business or research communities, they are consulting on major changes to the GDPR post-Brexit, in particular the suggestion that we get rid of Article 22, the one part of the GDPR dealing with a human in the loop, and that firms no longer be required to have a DPO or DPIAs
- Most recently, after a year’s work by the Council of Europe’s Ad Hoc Committee on the elements of a legal framework on AI, the Government at the very last minute put in a reservation saying they could not yet support the document going to the Council because more gap analysis was needed, despite extensive work on this in the feasibility study
- We also have no settled regulation for intrusive AI technology such as live facial recognition
- Above all, it is not yet clear whether they are still wedded to sectoral regulation rather than horizontal regulation
So I hope that when the White Paper does emerge it recognises that we need a considerable degree of convergence between ourselves, the EU and members of the CoE in particular, for the benefit of our developers and cross-border business, and that a risk-based form of horizontal regulation is required which operationalises the common ethical values we have all come to accept, such as the OECD principles.
Above all, this means agreeing on standards for risk and impact assessments, alongside tools for audit and continuous monitoring for higher-risk applications. That way I believe we can draw the US into the fold as well.
And that is not to mention the whole Defence and Lethal Autonomous Systems space, the subject of Stuart Russell’s second Reith Lecture, which, despite the promise of a Defence AI Strategy, is another and much more depressing story!
Report of the Joint Committee on the Draft Online Safety Bill: New offences and tighter rules
After four months' work, the Joint Select Committee on the Draft Online Safety Bill, with Damian Collins MP as our excellent chair, has produced its report, to generally favourable reviews.
We have tried to ensure in our recommendations that the safety duties placed on Ofcom and the platforms are much clearer, by reference to existing and new Law Commission-recommended offences, while at the same time not infringing rights to freedom of expression. We have also recommended that paid-for advertising and online scams be included in regulated activity; that it be made clear that all commercial pornography sites are subject to the Age Appropriate Design Code, so that age assurance ensures young people are not subjected to unwanted online porn; and that the role of Online Safety Ombudsman be established.
Here is the BBC summary
https://www.bbc.co.uk/news/technology-59638569
And here is the official parliamentary summary
https://ukparliament.shorthandstories.com/draft-online-safety-bill-joint-committee-report/index.html
The online world has revolutionised our lives
While the internet has created many benefits, underlying systems using data harvesting and microtargeted advertising have shaped the way we experience it.
Algorithms, invisible to the public, decide what we see, hear and experience. For some service providers, this means valuing the engagement of users at all costs, regardless of what holds their attention. This can result in amplifying the false over the true, the extreme over the considered, and the harmful over the benign.
The human cost of an unregulated internet can be counted in:
- mass murder in Myanmar
- intensive care beds full of unvaccinated covid-19 patients
- insurrection at the US Capitol
- teenagers sent down rabbit holes of content promoting self-harm, eating disorders and suicide.
The Online Safety Bill is a key step forward for democratic societies to bring accountability and responsibility to the internet.
We want the Bill to be easy to understand for service providers and the public alike. It should have clear objectives that lead into precise duties on the providers, with robust powers for the regulator to act when the platforms fail to meet those legal and regulatory requirements.
Online services should be held accountable for the design and operation of their systems and regulation should be governed by a democratic legislature and an independent Regulator—not Silicon Valley.
Four recommendations to strengthen the Bill
1. What's illegal offline should be regulated online
We agree that the criminal law should be the starting point for regulation of potentially harmful online activity, and that safety by design is critical to reduce its prevalence and reach.
A law aimed at online safety that does not require companies to act on, for example, misogynistic abuse or stirring up hatred against disabled people would not be credible. Leaving such abuse unregulated would itself be deeply damaging to freedom of speech online.
2. Ofcom should issue binding Codes of Practice
We recommend that Ofcom be required to issue a binding Code of Practice to assist providers in identifying, reporting on and acting on illegal content, in addition to those on terrorism and child sexual exploitation and abuse content.
As a public body, Ofcom's Code of Practice will need to comply with human rights legislation and this will provide an additional safeguard for freedom of expression in how providers fulfil this requirement.
3. New criminal offences are needed
We endorse the Law Commission's recommendations for new criminal offences in its reports.
The reports recommend the creation of new offences in relation to a number of harmful online activities.
We recommend that the Government bring in the Law Commission's proposed Communications and Hate Crime offences with the Online Safety Bill, if no faster legislative vehicle can be found. Specific concerns about the drafting of the offences can be addressed by Parliament during their passage.
4. Keep children safe from accessing pornography
All statutory requirements on user-to-user services, for both adults and children, should also apply to Information Society Services likely to be accessed by children, as defined by the Age Appropriate Design Code. This would have many advantages.
In particular, it would ensure all pornographic websites would have to prevent children from accessing their content. Many such online services present a threat to children both by allowing them access and by hosting illegal videos of extreme content.
Ingredients for good higher education governance
I was recently asked by Advance HE to write a short piece reflecting on governance in the higher education sector. It is reprinted in the Wild Search publication Robust, Resilient and Ready: Assessing and Strengthening Governance in Charities and Education.
https://www.advance-he.ac.uk/news-and-views/ingredients-good-higher-education-governance
https://www.wildsearch.org/pdf-robust-resilient-and-ready
Chairing a higher education institution is a continual learning process, and it was useful to reflect on governance in the run-up to Advance HE’s recent discussion session with myself and Jane Hamilton, Chair of Council of the University of Essex.
Governance needs to be fit for purpose in terms of setting and adhering to a strategy for sustainable growth with a clear set of key strategic objectives and doing it by reference to a set of core values. And I entirely agree with Jane that behaviour and culture which reflect those values are as important as governance processes.
But the context is much more difficult than when I chaired the School of Pharmacy from 2008, when HEFCE was the regulator, or even when I chaired UCL’s audit committee from 2012. The OfS is a different animal altogether and, despite the assurance of autonomy in the Higher Education Act, it feels a more highly regulated and more prescribed environment than ever.
I was a Company Secretary of a FTSE 100 company for many years so I have some standard of comparison with the corporate sector! Current university governance, I believe, in addition to the strategic aspect, has two crucial overarching challenges.
First, particularly in the face of what some have described as the culture war, there is the crucial importance of making, and being able to demonstrate, public contribution through – for example – showing that:
- We have widened access
- We are a crucial component of social mobility, diversity and inclusion and enabling life chances
- We provide value for money
- We provide not just an excellent student experience but social capital and a pathway to employment as well
- In relation to FE, we are complementary and not just the privileged sibling
- We are making a contribution to post-COVID recovery in many different ways, and contributed to the ‘COVID effort’ through our expertise and voluntary activity in particular
- We make a strong community contribution especially with our local schools
- Our partnerships in research and research output make a significant difference.
All this of course needs to be much broader than simply the metrics in the Research Excellence and Knowledge Exchange Frameworks or the National Student Survey.
The second important challenge is managing risk in respect of the many issues that are thrown at us, for example:
- Funding: post-pandemic funding; subject mix issues, arts funding in particular; the impact of falling overseas student recruitment; National Security and Investment Act requirements reducing partnership opportunities; loss of London weighting; possible fee reduction following the Augar Report recommendations
- The implications of action on climate change
- USS pension issues
- Student welfare issues such as mental health and digital exclusion
- Issues related to the Prevent programme
- Ethical Investment in general, Fossil Fuels in particular
- And, of course, freedom of speech issues brought to the fore by the recent Queen’s Speech.
This is not exhaustive as colleagues involved in higher education will testify! There is correspondingly a new emphasis on enhanced communication in both areas given what is at stake.
In a heavily regulated sector there is clearly a formal requirement for good governance in our institutions and processes and I think it’s true to say, without being complacent, that Covid lockdowns have tested these and shown that they are largely fit for purpose and able to respond in an agile way. We ourselves at Queen Mary, when going virtual, instituted a greater frequency of meetings and regular financial gateways to ensure the Council was fully on top of the changing risks. We will all, I know, want to take some of the innovations forward in new hybrid processes where they can be shown to contribute to engagement and inclusion.
But Covid has also demonstrated how important informal links are in terms of understanding perspectives and sharing ideas. Relationships are crucial and can’t be built and developed in formal meetings alone. This is particularly the case with student relations. Informal presentations by sabbaticals can reap great rewards in terms of insight and communication. More generally, it is clear that informal preparatory briefings for members can be of great benefit before key decisions are made in a formal meeting.
External members have a strong part to play through the expertise and perceptions they bring, in the student employability agenda and in the relationships they build within the academic community; harnessing these in constructive engagement is an essential part of informal governance.
So, going forward, what is and what should be the state of university governance? There will clearly be a need for continued agility, and there will be no let-up in the need to change and adapt to new challenges. KPIs are an important governance discipline, but we will need to review their relevance at regular intervals. We will need to engage with an ever wider group of stakeholders: local, national and global. All of our ‘civic university’ credentials may need refreshing.
The culture will continue to be set by VCs to a large extent, but a frank and open “no surprises” approach can be promoted as part of the institution’s culture. VCs have become much more accountable than in the past. Fixed terms and 360 appraisals are increasingly the norm.
The student role in co-creation of courses and the educational experience is ever more crucial. The quality of that experience is core to the mission of HE institutions, so developing a creative approach to the rather anomalous separate responsibilities of senate and council is needed.
Diversity on the Council in every sense is fundamental so that there are different perspectives and constructive challenge to the leadership. 1-2-1s with all council members on a regular basis to gain feedback and talk about their contribution and aspirations are important. At Council meetings we need to hear from not just the VC, but the whole senior executive team and heads of school: distributed leadership is crucial.
Given these challenges, how do we attract the best council members? Should we pay external members? Committee chairs perhaps could receive attendance allowance type payments. But I would prefer it if members can be recruited who continue to want to serve out of a sense of mission.
This will very much depend on how the mission and values are shared and communicated. So we come back to strategic focus, and the central role of governance in delivering it!
Tim Clement-Jones
Peers Advocate the Value of Music Therapy for Dementia
Peers recently debated a question raised by crossbencher Baroness Greengross on what steps the Government intend to take to increase the use of art- or music-based interventions in the care of people living with dementia.
I said that for more dementia patients to gain access to music therapy through social prescribing, there must be more training on the value of music for carers and healthcare practitioners and greater support for musicians to train as music therapists, and music education must be a much more mainstream part of primary and secondary school education. I asked: what assurance can the Minister give that the necessary cross-departmental government action is being taken to deliver on this?
The Minister replied: “The department itself is working closely with Music for Dementia and other organisations. Across government, we are looking at music, beyond just performance, to see how it can impact our lives and the role that it can have in levelling up and community cohesion, for example. Across government, I am sure that a number of departments are looking at this.”
So some progress, but not as firm on cross-departmental action as many of us would like!
Lord C-J: Why We Should Support Baroness Kidron's Age Assurance Bill
I recently wound up a debate in the House of Lords on the Second Reading of Baroness Beeban Kidron's Age Assurance Private Member's Bill, which is designed to set mandatory standards for age assurance for internet sites and online platforms in the UK.
Here is my speech explaining why I support this as an essential part of keeping children safe online.
My Lords, I add my thanks to those from all around the House to the noble Baroness, Lady Kidron, for introducing the Bill with such passion and commitment. We all know what an amazing campaigner she is. We hope that this is another stage in the end of—to adopt the powerful phrase of the noble Lord, Lord Cormack—the destruction of innocence.
It is a privilege to be winding up from these Benches, and to serve on the Joint Committee on the Draft Online Safety Bill along with the noble Baroness and the noble Lord, Lord Gilbert. Naturally, Members of the House have focused today on the importance of age assurance to child protection in terms of both safety and data protection. We should not tolerate the collateral damage from the online platforms—another powerful phrase, this time from the noble Baroness, Lady Kidron—as the cost of innovation.
A number of noble Lords, such as the noble Lord, Lord Russell, talked about cyberbullying, while the noble Baroness, Lady Bull, talked about the impact on body image and mental health in consequence. All around the House there are different motives for wanting to see proper standards for age assurance. I very much share what the noble Baroness, Lady Boycott, had to say about access to pornography having such a pernicious impact on young people’s relationships. At the same time, it was worth celebrating the genesis of Spare Rib. The word “empowerment” sprang to mind, because that underlies quite a lot of what we are trying to do today.
From my point of view, I cannot do better than quote the evidence from Barnardo’s to our Joint Select Committee to explain the frustration about what has been a long, winding and ultimately futile road over Part 3 of the Digital Economy Act, for which we waited for two years only to be told in 2019 that it was not going to be implemented. As Barnardo’s says:
“The failure to enact the original age verification legislation over three years ago has meant that thousands of children have continued to easily access pornography sites and this will continue unless the draft legislation is amended. Evidence shows (detailed later in the response) that accessing harmful pornography has a hugely damaging impact on children.”
As the noble Lord, Lord Lipsey, says, there should not be a second’s argument about restricting this. The evidence from Barnardo’s continues:
“The British Board of Film Classification survey in 2019 reported that children are stumbling upon pornography online from as young as seven. The survey also suggested that three-quarters of parents felt their child would not have seen porn online but more than half had done so.”
The noble Baroness, Lady Finlay, talked about a jungle of predators. The noble Lord, Lord Russell, even spoke of a pyramid of dung, which is an expression that was new to me today. The message from the noble Lord, Lord Gilbert, is clear: we need to be vigilant. It is astonishing that the online safety Bill itself does not tackle this issue. The noble Baroness, Lady Greenfield, put the whole impact of digital technology and social media on the brains of young people into perspective for many of us.
However, the crucial aspect is that the Bill is not about the circumstances in which age assurance or age verification is required but about the standards that must be adhered to. As the noble Baroness, Lady Kidron, said, we are talking about secure and privacy-protecting age assurance, proportionate to risk and with a route to redress. This is further testimony, from the NSPCC evidence to the Joint Select Committee:
“Given the intrinsic role of age assurance to deliver a higher standard of protection for children, the Government should set out further detail about how it envisages age assurance being implemented. Further clarification is required about if and when it intends to set standards for age assurance technologies. While the ICO intends to publish further guidance on age assurance measures later this year, it remains highly unclear what standards and thresholds are likely to apply.”
The process of setting standards started with the BBFC in preparing for the coming into effect of Part 3 of the Digital Economy Act. As the designated age-verification regulator, the BBFC published guidance on the kind of age-verification arrangements that would have ensured that pornographic services complied with the law. It rightly opted for a principles-based approach, as indeed does the Bill, rather than specifying a finite number of approved solutions, in order to allow for and encourage technological innovation within the age-verification industry. Sadly, that guidance was not implemented when the Government decided not to implement Part 3 of the DEA. As I say, this Bill adopts a similarly non-technology-prescriptive approach.
It is clearer than ever that, across a variety of use cases, we need a binding set of age-assurance standards. Again, I was taken by another phrase used by the noble Lord, Lord Gilbert: we need to create a positive place online. The noble Baroness, Lady Bull, talked about the digital world that children deserve. Why have the platforms not instituted proper age assessment already, as the noble Baroness, Lady Kennedy, asked? This should have been fixed a long time ago. As the noble Baroness, Lady Harding, says, it is a multibillion-pound industry and, as the noble Lord, Lord Gilbert, said, they already know a huge amount about us. As the noble Baroness, Lady Finlay, said, the responsibility lies with the platforms, not young people, who are prone to addictive technology.
We can and should incorporate the requirements of the Bill into the online safety Bill, but we have an urgent need to make sure that these age-assurance standards are clear much earlier than that. The ICO set out in its guidance in October, Age Assurance for the Children’s Code—previously the age-appropriate design code—expectations for age-assurance data protection compliance. Ofcom’s Video-sharing Platform Guidance: Guidance for Providers on Measures to Protect Users from Harmful Material, also published in October, is even weaker. There is not even an expectation; it simply says:
“VSP providers may consider the following factors when establishing and operating age assurance systems”.
That is all they are, considerations and expectations. At present there are no sanctions attached to those requirements. It is clear that we need binding standards for age assurance to make these sets of guidance fully operative and legally enforceable.
The technology is there. The Government should adopt the Bill here and now—and ensure its passage before the end of this term. As the noble Baroness, Lady Harding, says, it lies in our hands. I hope noble Lords will resist the temptation to table amendments during the passage of the Bill. If they do resist that, we could avoid Committee stage and move rapidly towards Report and Third Reading, to make this Bill a reality.
As the right reverend Prelate said, children are at risk for the want of this Bill and we have the regulator in the wings.
As the noble Baroness, Lady Kidron, said, we could simply include this in the online safety Bill, but an 11 year-old will be an adult by 2024. As she asked, how many children will be harmed in the interim? The noble Lord, Lord Griffiths of Burry Port, said that watertight protection means mandatory standards for age assurance. This is a vital brick in the wall. The noble Lord, Lord Lipsey, said that he had heard every excuse from government for not implementing legislation—I think I am with him on that—and also said that this Bill is absolute proof against that. Let us give it the fairest wind we possibly can.