

AI and Copyright: Lord C-J - "The Government need to take this option off the table"

 

With huge thanks to Christian Gordon-Pullar for all his work, here is our response to the Government's consultation on IP and Copyright. We are clear that there is no lack of clarity in UK copyright law that would allow technology companies to scrape the internet and use copyright material to train their AI models without any recompense to creators, and that we need to introduce clear rules requiring transparency of use and a better enforcement mechanism for breaches of copyright.

I and my Liberal Democrat colleagues fully support the major campaign by the media, artists and the creative industries demanding that the government take its preferred option, a text and data mining exception requiring an opt-out, off the table and ensure that one of the most valuable sectors in the British economy survives and thrives alongside AI.

Here is a link to the Consultation 

https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence

And here is our response

Response to Consultation:  AI and Copyright on behalf of Lord Clement-Jones and Christian Gordon-Pullar 

  1. Context for Response to Consultation

The use of AI clearly offers significant opportunities across the broad canvas of the United Kingdom’s creative industries, and beyond. Creators and associated creative businesses are using AI technology to support creativity, the process of content production or to help personalise content. AI clearly has many creative uses, as Sir Paul McCartney has emphasised. It is one thing, however, to use the technology but another to be at the mercy of it.

The Government consultation[3] itself begins with the sentence:

“Two major strengths of the UK economy are its creative industries and AI sector. Both are essential to drive economic growth and deliver the government’s Plan for Change.”

We support the policy objectives within the consultation and, in particular, at a high level, the three objectives it sets out in relation to AI and Copyright, namely:

  1. Supporting right holders’ control of their content and ability to be remunerated for its use.
  2. Supporting the development of world-leading AI models in the UK by ensuring wide and lawful access to high-quality data.
  3. Promoting greater trust and transparency between the sectors.

It is incumbent on any Government to find a true and fair balance for authors, musicians, artists and all creative content creators and owners, and not simply to favour foreign and domestic tech and AI companies and tech entrepreneurs at the expense of the creative giants on whose works their success relies and on whose shoulders their business and technology stands.

The Ministerial foreword reinforces this:

“This consultation sets out our plan to deliver a copyright and AI framework that rewards human creativity, incentivises innovation and provides the legal certainty required for long-term growth in both sectors.”

It is unclear and remains unexplained - in the Consultation - why the Government states:

“AI firms have raised concerns that the lack of clarity over how they can legally access training data creates legal risks, stunts AI innovation in the UK and holds back AI adoption”

It is entirely unclear what lack of clarity is being referenced. There is currently clarity and certainty in the copyright regime in the United Kingdom, which additionally recognises Computer Generated Works (see para 51 of the Consultation). In relation to copyright and Intellectual Property (IP), under the current law, consent must be secured for the ingestion of rightsholders’ content by AI developers. The Consultation appears to be creating the distinct impression that copyright owners should be concerned, and it is this that is creating uncertainty.

The Consultation also states:

The creative industries drive our economy, including TV and film, advertising, the performing arts, music, publishing, and video games. They contribute £124.8 billion GVA to our economy annually, they employ many thousands of people, they help define our national identity and they fly the flag for our values across the globe. They are intrinsic to our success as a nation and the intellectual property they create is essential to our economic strength.

It is unclear, however, whether and to what extent the Government carried out any serious investigation into the financial impact on the creative industries in preparing this Consultation, or since its publication. It is clear, however, that the impact will be significant and very likely greater than the proposed benefits of the data centres and investments offered by Big Tech.

The estimate of benefits to the UK economy used by the AI Opportunities Plan is built on shaky foundations. It is derived from Google's UK Economic Impact Report, which highlighted that "AI-powered innovation could create over £400 billion in economic value for the UK economy by 2030". The £400 billion figure cited by Google comes from a report commissioned by Google and compiled by the consultancy firm Public First, designed to analyse the potential effects of AI adoption on the UK economy by 2030. Public First conducted the research using several methods:

  • Polling of over 4,000 individuals across every region in the UK
  • Polling of 1,000 senior business leaders from small, medium, and large businesses across various industries
  • Traditional economic modelling to measure the economic activity driven by Google products.

The report estimates that AI-powered innovation could create over £400 billion in economic value for the UK economy by 2030, which is equivalent to an annual growth rate of 2.6%. 

This figure is based on projections of how AI technologies could boost productivity, create new job opportunities, and drive innovation across various sectors of the economy. It is important to note that this is a projection based on economic modelling and assumptions about future AI adoption and impact. As with any such forecast, it should be viewed as an estimate rather than a guaranteed outcome.

We remain convinced that the current copyright regime is clear and no evidence has been produced to warrant a new and more permissive exception regime to existing copyright laws in the United Kingdom.  It is our preferred option that the Government makes a clear statement that the use and/or ‘ingestion’ of content, without consent, to train an AI model capable of being used beyond non-commercial research, constitutes copyright infringement.

  1. Foreword/Summary

The question of the balance between copyright and data mining (text and data mining, or TDM) is a major issue for content owners and creatives in the literary, musical and visual arts, not just in the UK but around the world.

Getty and the New York Times are suing in the United States, as are many writers, artists and musicians, and it was at the root of the Hollywood actors' and writers' strikes last year.

Here in the United Kingdom, as the Government’s intentions have become clearer, the temperature has risen. We have seen the creation of a new campaign, the Creative Rights in AI Coalition (CRAIC), across the creative and news industries, and Ed Newton-Rex[4] raising over 30,000 signatories from creators and creative organisations. But with the current Consultation we are now faced with a proposal for a text and data mining exception which we thought was settled under the last Government. It starts from the false premise of legal uncertainty.

As the News Media Association says:

The government’s consultation is based on the mistaken idea—promoted by tech lobbyists and echoed in the consultation—that there is a lack of clarity in existing copyright law. This is completely untrue: the use of copyrighted content by Gen AI firms without a license is theft on a mass scale, and there is no objective case for a new text and data mining exception.

There is no lack of clarity over how AI developers can legally access training data. The applicable law in England and Wales is absolutely clear that commercial organisations – including Gen AI developers – must license the data they use to train their Large Language Models (“LLMs”). Merely because AI platforms such as Stability AI are resisting claims does not mean the law in the UK is uncertain. There is no clear reason for developers to find ‘it difficult to navigate copyright law in the UK’, and no need for them to do so.

AI developers have already, in a number of cases, reached agreement with news publishers. OpenAI has signed deals with publishers like News Corp, Axel Springer, The Atlantic, and Reuters, offering annual payments between $1 million and $5 million, with News Corp’s deal reportedly worth $250 million over five years.

More recently, it is clear that the US fair use defence questions have not been settled despite the ruling in Thomson Reuters v. ROSS Intelligence, which involved Thomson Reuters suing ROSS Intelligence for using its copyrighted Westlaw headnotes to train an AI-powered legal research tool. On February 11, 2025, Judge Stephanos Bibas of the Delaware federal district court ruled against ROSS, rejecting its fair use defence and granting partial summary judgment in favour of Thomson Reuters. It is notable, however, that the court emphasised that ROSS’s use was commercial and non-transformative, as it created a competing product using the copyrighted material. This decision is significant as it sets a precedent for AI copyright cases, though it does not address generative AI specifically.

There can be no excuse of market failure. There are well-established licensing solutions administered by a variety of mechanisms and collecting societies. There should be no uncertainty around the existing law and the surrounding legal framework. We have some of the most effective collective rights organisations in the world.

The Consultation says that “The government believes that the best way to achieve these objectives is through a package of interventions that can balance the needs of the two sectors.” The government appears to believe we need to achieve a balance between the creative industries and the tech industries. But the Consultation raises the fundamental question as to what kind of balance the government’s preferred option will deliver.

The government’s preferred option is to change the UK’s copyright framework by creating a text and data mining exception where rights holders have not expressly reserved their rights—in other words, an ‘opt-out’ system, where content is free to use unless a rights holder proactively withholds consent. To complement this, the government is proposing: (a) transparency provisions; and (b) provisions to ensure that rights reservation mechanisms are effective.

The government has stated that it will only move ahead with its preferred ‘rights reservation’ option if the transparency and rights reservation provisions are ‘effective, accessible, and widely adopted’. However, it will be up to Ministers to decide what provisions meet this standard, and it is clear that the government wishes to move ahead with this option regardless of workability, without knowing if their own standards for implementation can be met.

A few key overarching points to note:

  1. Although it is absolutely clear that the use of copyright works to train AI models is contrary to UK copyright law, the laws around transparency of these activities have not caught up. As well as using pirated e-books in their training data, AI developers scrape the internet for valuable professional journalism (even where such articles are protected by © Copyright notices and terms and conditions) and other media, in breach of both the terms of service of websites and copyright law, for use in training commercial AI models.
  2. At present, developers can do this without declaring their identity, or they may use IP that was scraped for inclusion in a search index for the completely different commercial purpose of training AI models.
  3. How can rights owners agree – in principle or in practice – to opt out of something they do not fully understand or even know about? AI developers will often scrape websites, or access other pirated material, before they launch an LLM in public. This means there is no way for IP owners to opt out of their material being taken before its inclusion in these models. Once used to train these models, the commercial value has already been extracted from the third-party IP scraped without permission, with no practical way to find or delete data from those models.
  4. The next wave of AI models responds to user queries by browsing the web to extract valuable news and information from professional news websites. This is known as Retrieval Augmented Generation (RAG). Without payment for extracting this commercial value, AI agents built by companies such as Perplexity, Google and Meta will effectively free-ride on the professional hard work of journalists, authors and creators. At present such crawlers are hard to block.

This is incredibly concerning, given that no effective ‘rights reservation’ system for the use of content by Gen AI models has been proposed or implemented anywhere in the world, making the government proposals entirely speculative.

As the NMA also says:

What the government is proposing is an incredibly unfair trade-off—giving the creative industries a vague commitment to transparency, whilst giving the rights of hundreds of thousands of creators to Gen AI firms. While creators are desperate for a solution after years of copyright theft by Gen AI firms, making a crime legal cannot be the solution to mass theft.[5]

We need transparency and a clear statement about copyright. We absolutely should not expect artists to have to opt out. AI developers must: be transparent about the identity of their crawlers; be transparent about the purposes of their crawlers; and have separate crawlers for distinct purposes. Unless news publishers and the broader creative industries can retain control over their data – making UK copyright law enforceable – AI firms will be free to scrape the web without remunerating creators. This will not only reduce investment in trusted journalism, but it will ultimately harm innovation in the AI sector. If less and less human-authored IP is produced, tech developers will lack the high-quality data that is the essential fuel of generative AI.

Amending the applicable Law to address the challenges posed by AI development, particularly in relation to copyright and transparency, is essential to protect the rights of creators, foster responsible innovation, and ensure a sustainable future for the creative industries.

This should apply, where developers market their product in the UK, regardless of the country in which the scraping of copyright material or the training takes place.

It will also ensure that AI start-ups based in the UK are not put at a competitive disadvantage due to the ability of international firms to conduct training in a different jurisdiction. It is clear that AI developers have used their lobbying clout to persuade the government that a new exemption from copyright - in their favour - is required.

In response we will be vigorously opposing the preferred option for a new text and data mining exemption with an opt-out and will be seeking to ensure that the government answers the following key questions before proceeding further:

  1. What led the government to do a u-turn on the previous government’s decision to drop the text and data mining exemption it proposed?
  2. What estimate has it made of the damage to the creative industries from implementing its clearly favoured option of a TDM exception plus opt-out, given there is no robust economic assessment currently in existence?
  3. Is damaging the most successful UK economic sector for the benefit of US AI developers what it means by balance?
  4. Why has it not included the possibility of an opt-in to a TDM exception in its consultation paper options?
  5. What examples of successful, workable opt-outs or rights reservation from TDMs can it draw on, particularly for small rights holders? What research has it done? The paper essentially admits that effective technology is not there yet. Isn’t it clear that the EU opt-out system under the Copyright Directive has not delivered clarity?
  6. What regulatory mechanism, if any, does the government envisage if its proposal for a TDM exception with rights reservation/opt-out is adopted? How are creators to be sure any new system would work in the first place?

Detailed Response below

  1. Response to Consultation
  1. Copyright – Text and Data Mining

The three stated objectives are set out in paragraph 54 of the Consultation[6]:

  1. Supporting right holders’ control of their content and ability to be remunerated for its use.
  2. Supporting the development of world-leading AI models in the UK by ensuring wide and lawful access to high-quality data.
  3. Promoting greater trust and transparency between the sectors.

The Government rightly believe that there is a need to promote and further enable AI development. This must however be balanced with a commensurate and proportionate recognition of the critical importance and value of data as raw material.  AI developers rely on high-quality data to develop reliable and innovative AI-driven inventions and applications. Licensing regimes under existing IP law are designed to cater for the needs of AI developers.

By the same token, content and data-driven businesses themselves have seen a rapid increase in the use of AI technology and machine learning, whether for news summaries, data-gathering efforts, translations for research and journalistic purposes, or to assist organisations to save time by processing large amounts of text and other data at scale and speed. Digital technologies, including AI, are and will continue to be of critical importance to these industries, helping create content, new products and value-added services to deliver to a broad range of corporate and retail clients. Whether in news media or cross-industry research, publishers are themselves investing in AI; continued collaboration with start-ups and academia is creating tailored materials for wide populations of beneficiaries (students, academia, research organisations, and even marketers of consumer publishing products).

It is of paramount importance to balance the needs of future AI development with the legal, commercial and economic rights of copyright and data-owners and the need to incentivize new AI adoption with recognition of the rights of – and remuneration for - existing content owners.

We have, however, seen no evidence that the existing copyright legislative framework fails to adequately address the current needs of AI developers. Moreover, it is particularly important, in our view, to ensure that the development of AI is not enabled at the expense of the underlying investment by copyright and data owners (see endnote 1).

If the content owners of underlying data materials withhold the licensing of, or access to, such materials or attempt to price them at a level that is unfair, the answer is for Government via the Competition and Markets Authority/the new Digital Markets Unit (or indeed other regulators who form part of the Digital Regulation Cooperation Forum) to put in place competition measures to ensure there is a clear legal recourse in such situations.

In summary we do not believe that current copyright law creates a disparity between the interests of AI developers and investors and content owners. The existing copyright regime under the CDPA reflects a balance that fairly protects those investing in data creation without giving an unfair advantage to technology companies offering AI-enabled content creation services. In particular the current framework provides a balanced regime for data and text mining and we believe no changes are required at present.

At the very least, if AI Operators and providers are unable to demonstrate transparency and provide users and regulators with access to clear records of the inputs that the AI technology has used (e.g. whether sources of content include copyrighted content), it will be impossible to satisfy the UK regime as well as basic international standards on cybersecurity, let alone copyright infringement or applicable parallel import laws, in order to satisfy UK sovereignty principles.

Our RESPONSES follow below, in order.

Section C1

  1. Question 1. Do you agree that option 3 is most likely to meet the objectives set out above?

NO, we do not agree. 

  • Creating a more permissive system of copyright is unlikely to incentivise AI developers to obtain consent or license content from rightsholders.
  • AI developers have shown little appetite to license content at scale and there have been no signals, from what we have seen, that that position would change under any new regime. In the EU, which introduced a new Text and Data Mining (TDM) Exception with an Opt-Out (before the explosion in AI development) there has been no material increase in licensing of content, demonstrating that it is not the law which is preventing such licensing.
  • As currently drafted, the new exception proposed in the Consultation would also be available to all users, not simply to AI developers for training. This would mean any user could copy works and reproduce them for commercial gain unless those rights were reserved. This presents the distinct opportunity for unscrupulous users deliberately to seek out works that are not rights-reserved and exploit them commercially, which is not possible under the existing copyright system.
  1. Question 2. Which option do you prefer and why?

Ranking Options in order:

  • We would therefore urge the Government to elect Option 0 – Make no legal change. No other option is currently justifiable given the lack of evidence of an adverse commercial environment preventing access to data or text by AI-enabled content creators. Should the Government or IPO consider that there needs to be increased access to data at lower cost, it should look at other policy levers to stimulate such uptake, such as providing tax incentives for content owners to license content, rather than reducing copyright protection.
  • We also concur with industry leaders who consider that forcing rightsholders to opt in to protection, or opt out of a data mining exception, as suggested in Option 3, would be complicated and costly for many businesses and industries who own literally millions of works, when licensing is far simpler, and would be against the spirit of international treaties on copyright.
  • Further, such changes would impact the rights of copyright owners as enshrined in Article 1 of the First Protocol to the European Convention on Human Rights (ECHR). The Human Rights Act 1998 incorporates rights contained in the ECHR into UK national law. This means that they can be used to challenge the actions and decisions of governments and public bodies in the UK courts. Under the UK Human Rights Act 1998, intellectual property rights are protected as part of the broader “right to property” enshrined in Article 1 of the First Protocol, meaning that public authorities cannot interfere with your intellectual property without a legitimate legal reason and in the public interest; this includes patents, trademarks, copyrights, and other forms of intellectual property you may own.

Article 1 of the First Protocol states:

“Every natural or legal person is entitled to the peaceful enjoyment of his possessions. No one shall be deprived of his possessions except in the public interest and subject to the conditions provided for by law and by the general principles of international law.

 

The preceding provisions shall not, however, in any way impair the right of a State to enforce such laws as it deems necessary to control the use of property in accordance with the general interest or to secure the payment of taxes or other contributions or penalties.”

Possessions include any tangible or intangible property.

While the Act protects intellectual property, it does allow for limitations in the public interest, meaning that the government can restrict intellectual property rights under certain circumstances if it is deemed necessary for the greater good. The proposed exception is clearly for the benefit of tech and AI companies, not the greater good of content owners and creative industries across the fields of literary, musical and visual arts, inter alia.

Section C1 (cont.)

Question 3. Do you support the introduction of an exception along the lines outlined above?

RESPONSE: No, this is not necessary under UK law as the copyright owner already holds such rights, and such an exception would not be effective.

Absent a licence, or consent in writing, such rights to control his/her/its copyright are reserved for the copyright owner and no use of that copyright is permitted (except under existing non-commercial research exceptions for academic research, inter alia).  Any such unauthorised use would constitute copyright infringement.

Question 4. If so, what aspects do you consider to be the most important? If not, what other approach do you propose and how would that achieve the intended balance of objectives?

RESPONSE: Only applicable if Option 3 is the eventual outcome. If such an approach were in fact adopted at the end of the consultation, a presumption (as per existing UK law) should exist that no content is automatically permitted for TDM use by AI/tech companies or other third parties, even where content available publicly or otherwise does not carry textual or machine-readable opt-out language. The presumption must be in favour of the content and copyright owner (otherwise it risks creating costly litigation for SMEs and individuals, who cannot reasonably be expected to allocate funds to litigate against foreign and domestic tech companies and other well-funded tech start-ups seeking to use content without consent).

Any new exception would also have to be narrowly drafted to ensure it is limited to AI training, to ensure ill-intentioned users do not exploit the new system to reproduce works for commercial gain outside of the AI environment.

Question 5.  What influence, positive or negative, would the introduction of an exception along these lines have on you or your organisation? Please provide quantitative information where possible.

RESPONSE: Any new exceptions would adversely impact the creative industries both operationally and financially, as seen from feedback, publications and statements made by the Performing Right Society (PRS)[7][8], Anti-Copying in Design (ACID)[9] and others (see footnotes for references).

Content owners would have to spend time and money on legal advice, potentially, to:

  • Embed Metadata and Watermarks - Add metadata to digital files to indicate copyright ownership and usage restrictions. Watermarks could deter unauthorised use if a robust and easily usable form were readily available. Embedding metadata could be relatively simple and could be done using file properties, specialised software or programming methods (e.g., EXIF for images, or custom fields in JSON or XML); see Appendix 1 and the illustrative sketch after this list.
  • Monitor and Enforce Their Rights
    Content owners would have to regularly check for unauthorised use of their copyright works online. If an owner identifies infringements, they would need to contact the offending party to request removal or seek legal advice. However, identifying the offending party remains a significant challenge without proper transparency requirements in place.
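
As a purely illustrative sketch (not an established standard), the snippet below shows one simple way a content owner might record copyright ownership and a text and data mining reservation in a machine-readable sidecar file, using only the Python standard library. The field names (such as "tdm_reservation") and contact details are our own assumptions for illustration.

```python
import json
from pathlib import Path
from typing import Optional

def write_rights_sidecar(asset_path: str, owner: str, licence_url: Optional[str] = None) -> Path:
    """Write a simple machine-readable rights notice next to a digital asset.

    The field names used here are purely illustrative; any real deployment would
    need to follow whatever rights-reservation vocabulary is eventually standardised.
    """
    asset = Path(asset_path)
    sidecar = asset.with_suffix(asset.suffix + ".rights.json")
    record = {
        "asset": asset.name,
        "copyright": f"© {owner}. All rights reserved.",
        "tdm_reservation": True,          # rights expressly reserved for text and data mining
        "licence": licence_url,           # where a licence may be obtained, if any
        "contact": "rights@example.org",  # hypothetical contact point
    }
    sidecar.write_text(json.dumps(record, ensure_ascii=False, indent=2), encoding="utf-8")
    return sidecar

# Example usage (hypothetical file and owner):
# write_rights_sidecar("photo_0001.jpg", "A Photographer", "https://example.org/licence")
```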

For example, a photographer would have to retrospectively opt out thousands of individual works to gain protection which is currently automatic: time that they can ill afford to spend and which would be better spent generating new revenue-generating, copyright-protected works. Legal costs would likely increase in order to challenge infringement, and under a new regime there would have to be a dual track for action, one under the new regime and another under the existing regime, potentially doubling legal costs.

Question 6. What action should a developer take when a reservation has been applied to a copy of a work?

RESPONSE: The developer must seek consent and pay for the content before training AI or technology systems on the content, and without such consent should not train its AI or technology on that content. This applies equally today under the existing law – yet most companies ignore such rights because they are not enforced and the cost of enforcement is too financially burdensome for content owners – hence the rights should be bolstered, not diluted.

Question 7. What should be the legal consequences if a reservation is ignored?

RESPONSE: Any new system for rights reservation must have at least the same legal standing as Technical Protection Measures, and even that is sub-optimal. We propose that a statutory strict liability should be imposed and a presumption of copyright infringement should apply in cases where use is without consent/licence.

Question 8. Do you agree that rights should be reserved in machine-readable formats? Where possible, please indicate what you anticipate the cost of introducing and/or complying with a rights reservation in machine-readable format would be.

RESPONSE: No: any such system should be sufficiently flexible to enable different content owners to opt out for different types of works. While machine-readable formats would most likely be required, these must be simple and low-cost enough for all rightsholders to access (an illustrative sketch of one such machine-readable check appears below); without this, such measures place the burden on content owners to spend money to defend copyright and IP protection, rights that are fundamentally embodied in existing law and already held under the Human Rights Act 1998.
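
Purely as an illustration of what "machine-readable" could mean in practice, the sketch below uses Python's standard robots.txt parser to check whether a hypothetical AI-training crawler (here called "ExampleAIBot") is permitted to fetch a page before doing so. robots.txt is only one possible signal and is not an agreed rights-reservation standard; the crawler name and URLs are assumptions for illustration.

```python
from urllib import robotparser

def may_fetch_for_training(page_url: str, robots_url: str, crawler_name: str = "ExampleAIBot") -> bool:
    """Return True only if the site's robots.txt permits this crawler to fetch the page.

    This treats a robots.txt disallow rule as a (very crude) machine-readable rights
    reservation; a real regime would need a dedicated, standardised signal.
    """
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetches and parses the site's robots.txt
    return parser.can_fetch(crawler_name, page_url)

# Example usage (hypothetical site):
# if may_fetch_for_training("https://example.org/article", "https://example.org/robots.txt"):
#     ...fetch only what is permitted, and use it only under an appropriate licence...
```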

Section C2: Technical Standards

Question 9. Is there a need for greater standardisation of rights reservation protocols?

RESPONSE: If required at all, standardisation of such protocols would seem helpful.

Question 10. How can compliance with standards be encouraged?

RESPONSE: Infringement or breach of any such protocols would need to be clearly stated to constitute copyright infringement, with deterrents in place to create a compliant legislative regime. In the absence of such protocols, a statutory strict liability should be imposed or a presumption of copyright infringement should apply.

Question 11. Should the government have a role in ensuring this and, if so, what should that be?

RESPONSE: Establish a Government regulator or unit to enforce such rights, paid for by the tech industry, which is demanding additional rights that derogate from the rights of copyright and IP owners which already exist under UK copyright legislation and the Human Rights Act 1998.

Section C3 – Licensing and contracts

Question 12. Does current practice relating to the licensing of copyright works for AI training meet the needs of creators and performers?

RESPONSE: Currently the licensing regime does not expressly address licensing for AI training, but AI training entities should apply the existing legal principles under existing law and therefore actually check copyright notices and apply for a licence/consent where no other approach is available.

Question 13. Where possible, please indicate the revenue/cost that you or your organisation receives/pays per year for this licensing under current practice.

RESPONSE:  n/a from the authors

Question 14. Should measures be introduced to support good licensing practice?

RESPONSE: There is no presumption that commercial AI training or use of inputs is permitted under UK copyright law, and rights-management societies and professional bodies, including PRS and other licensing organisations, already provide for such good licensing practices and may therefore simply need to update those practices for use by AI.

See https://www.prsformusic.com/ and also https://www.gov.uk/licence-to-play-live-or-recorded-music , the Independent Cinema Office (ICO) for film licensing at https://www.independentcinemaoffice.org.uk/advice-support/what-licences-do-i-need/film-copyright-licensing/ and ICMP for contemporary music at https://www.icmp.ac.uk/blog/understanding-music-copyrights-and-licenses

Question 15. Should the government have a role in encouraging collective licensing and/or data aggregation services? If so, what role should it play?

RESPONSE: No - this should be left to professional collection societies and licensing bodies authorised by each industry, but the Government could, as an alternative to the preferred approach of robust enforcement, assist content owners by making any unauthorised use enforceable as a statutory liability, or create a presumption of infringement if that is not already clear (it seems clear to the authors).

Question 16. Are you aware of any individuals or bodies with specific licensing needs that should be taken into account?

RESPONSE:  n/a

Section C4 – Transparency

Question 17. Do you agree that AI developers should disclose the sources of their training material?

RESPONSE: YES. Transparency is vital to the AI eco-system. We advocate for transparency, by which we mean that AI developers must maintain records, at a granular level, of the individual works that their AI systems have ingested.

Question 18. If so, what level of granularity is sufficient and necessary for AI firms when providing transparency over the inputs to generative models?

RESPONSE: As with the current law – the source, author and detail of the data/content used and whether or not it is used under licence. Granularity is crucial – a general statement would not be sufficient to protect the principles of transparency nor to protect creators’ rights under the law.

Question 19. What transparency should be required in relation to web crawlers?

RESPONSE: We should retain the amendments to the Data (Use and Access) Bill in this respect proposed by Baroness Kidron and passed by the House of Lords on 28 January 2025, which provide, inter alia, for regulations requiring disclosure by operators of AI models of the following (a purely illustrative machine-readable form of such a disclosure is sketched after this list):

  • the name of the crawler,
  • the legal entity responsible for the crawler,
  • the specific purposes for which each crawler is used,
  • the legal entities to which operators provide data scraped by the crawlers they operate, and
  • a single point of contact to enable copyright owners to communicate with them and to lodge complaints about the use of their copyrighted works.
  • the URLs accessed by crawlers deployed by them or by third parties on their behalf or from whom they have obtained text or data,
  • the text and data used for the pre-training, training and fine-tuning, including the type and provenance of the text and data and the means by which it was obtained,
  • information that can be used to identify individual works, and
  • the timeframe of data collection.
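
Purely as an illustration of how such disclosures could be published in machine-readable form, the sketch below defines a simple record covering the fields listed above. The structure and field names are our own assumptions, not those of the Bill or of any regulation.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class CrawlerDisclosure:
    """Illustrative machine-readable disclosure for a single web crawler.

    Field names are hypothetical; they simply mirror the categories of information
    listed in the amendments described above.
    """
    crawler_name: str
    responsible_entity: str
    purposes: list                       # e.g. ["search indexing"] or ["AI training"]
    data_recipients: list                # legal entities receiving scraped data
    contact_point: str                   # single point of contact for rights holders
    accessed_urls: list = field(default_factory=list)
    training_data_description: str = ""  # type and provenance of text/data used
    identifiable_works: list = field(default_factory=list)
    collection_period: str = ""          # e.g. "2024-01-01 to 2024-06-30"

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# Example (entirely fictional values):
# print(CrawlerDisclosure("ExampleAIBot", "Example AI Ltd", ["AI training"],
#                         ["Example AI Ltd"], "rights@example-ai.test").to_json())
```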

Question 20.What is a proportionate approach to ensuring appropriate transparency?

RESPONSE: Unclear, but it must at least involve an effort by AI and tech developers scraping content that is equal to or greater than that being asked of content owners, who would have to add technical measures to their content (e.g. watermarks and machine-readable opt-out notices) and/or bear further technical, legal and operational costs to craft disclaimers or text asserting their already-existing rights.

Question 21. Where possible, please indicate what you anticipate the costs of introducing transparency measures on AI developers would be.

RESPONSE: Unclear at this stage, but perhaps the Government could broker, as part of its incentive deals, a framework to resolve past copyright infringement issues (a one-off settlement/payment for past infringement), to obviate the need for class actions by creative content owners or individuals.

Question 22. How can compliance with transparency requirements be encouraged, and does this require regulatory underpinning?

RESPONSE: If Option 3 is adopted, then it must be a condition that tech developers and AI companies, at least, take all reasonable operational measures to ensure that copyright content is licensed or that its use as input and output is authorised (under licence or written consent), such efforts to be at least equal to or greater than those likely to be asked of content owners (who would have to add technical measures to their content, e.g. watermarks and machine-readable opt-out notices, and/or bear further technical, legal and operational costs to craft disclaimers or text asserting their already-existing rights).

Question 23. What are your views on the EU’s approach to transparency?

RESPONSE: It is very questionable, to say the least, how effective or workable the Working Groups implementing the EU AI Act have found the opt-out provisions; in the meantime, the transparency provisions are a clear benchmark for the UK and it should take note, given that until recently the UK was bound by such rules. The law in the UK should at least equally protect UK citizens and content and creative owners, to promote consistency and to avoid a mass migration of creatives, but should not impose unworkable opt-out mechanisms based on an as-yet-untested EU comparison.

Section C5 : Clarification of Copyright Law

Question  24. What steps can the government take to encourage AI developers to train their models in the UK and in accordance with UK law to ensure that the rights of right holders are respected?

RESPONSE:   See above responses to Q20 and Q22 – and reiterated here.  A statutory strict liability should be imposed or a presumption of copyright infringement should apply, failing which, the Government should make a clear statement, in the form of a Copyright Notice, that the current exception regime does not allow for the use of works, covered by copyright, for commercial purposes, without the consent of the owner of those works.

Section C6

Question  25. To what extent does the copyright status of AI models trained outside the UK require clarification to ensure fairness for AI developers and right holders?

RESPONSE: If an AI company has trained its AI on content that is covered by copyright in the United Kingdom, then making the output or service provided by that company available in the United Kingdom would still constitute copyright infringement.

At the very least, if AI Operators and providers are unable to demonstrate transparency and provide users and regulators with access to clear records of the inputs that the AI technology has used (e.g. whether sources of content include copyrighted content), it will be impossible to satisfy the UK regime as well as basic international standards on cybersecurity, let alone copyright infringement or applicable parallel import laws, in order to satisfy UK sovereignty principles.

Question 26. Does the temporary copies exception require clarification in relation to AI training?

RESPONSE: No; this is no defence. It is also no different to the existing approach taken by any computer (an AI is just a software programme and no different to existing technologies, for now).

Question 27. If so, how could this be done in a way that does not undermine the intended purpose of this exception?

RESPONSE:  We are not in favour of any exception but if such an exception were to be considered, then clear guardrails would need to be implemented – to ensure that any such temporary copies create no economic value or advantage.

Section C6 - Encouraging Research and Innovation

Question  28. Does the existing data mining exception for non-commercial research remain fit for purpose?

RESPONSE: YES, it is sufficient and fit for purpose as it currently stands[10]. The exception received significant Parliamentary scrutiny before being implemented in 2014 and we believe any reform would significantly change the careful balance agreed upon then. Any such reform of the exception would require significant and separate analysis, as opposed to being mixed in with this consultation.

Question 29. Should copyright rules relating to AI consider factors such as the purpose of an AI model, or the size of an AI firm?

RESPONSE: No. All such instances and uses of copyright content are still governed by the existing UK copyright legislation, and the size or purpose of the firm is irrelevant (unless perhaps it is a true charity, not a charitable front designed by and for a commercial purpose).

Section D - Computer-generated works: protection for the outputs of generative AI

Option 0: No legal change, maintain the current provisions

RESPONSE:  Maintain the status quo.

  • Computer Generated Works (CGWs) distinguish the UK from other countries and prevent the argument that AI needs to ‘own’ IP outside of the existence of a ‘human author’ for creativity – it does not. AI is a tool in the hands of a company or individual.
  • CGW protection is necessary to encourage the production of outputs by generative AI or other tools, and any legal ambiguity is likely to be resolved or of little effect. The Courts will resolve any ambiguity, as they have done in England and Wales for centuries.
  • The provision in s.9(3) CDPA works: if a work is computer-generated – that is, not authored by a human – then copyright is vested in the person who made the arrangements necessary for the creation of the work.
  • AI does not require or deserve any special rights or considerations, and such rights are adequately covered in the relevant s.9(3) of the CDPA.

Section D2 - Outputs

Question 30. Are you in favour of maintaining current protection for computer-generated works? If yes, please explain whether and how you currently rely on this provision.

RESPONSE: YES. See above regarding Computer Generated Works, expressly that these distinguish the UK from other countries where such a regime does not exist.

Question 31. Do you have views on how the provision should be interpreted?

RESPONSE: It has been clearly interpreted in case law. The Advocate General in Painer[11] took this view, noting that only human creations can be copyright-protected (although the human can employ a “technical aid” like a camera). A similar position has also been taken by the U.S. Copyright Office, which determined that images created using the generative AI model Midjourney were not original works of authorship protected by U.S. copyright law, because this excludes works produced by non-humans[12]. Caselaw from other countries also reflects this understanding[13]. It is right and proper that the facts of each case should determine the outcome, as was Parliament’s intention[14].

In short, no changes to the CGW provisions are required.

Question 32. Would computer-generated works legislation benefit from greater legal clarity, for example to clarify the originality requirement? If so, how should it be clarified?

RESPONSE: No.

Question 33. Should other changes be made to the scope of computer-generated protection?

RESPONSE:  No

Question 34. Would reforming the computer-generated works provision have an impact on you or your organisation? If so, how? Please provide quantitative information where possible.

RESPONSE: Unknown until details are provided of what the changes would be in a legislative context; in any event, the authors consider such reform unnecessary.

Question 35. Are you in favour of removing copyright protection for computer-generated works without a human author?

RESPONSE: NO, for the reasons given above. The UK is fortunate to have a CGW right, which is absent from many legislative frameworks.

Question 36. What would be the economic impact of doing this? Please provide quantitative information where possible.

RESPONSE: Unknown as yet.

Question 37. Would the removal of the current CGW provision affect you or your organisation? Please provide quantitative information where possible.

RESPONSE:  Almost certainly given the licensing arrangements and revenue based on existing legislation. Quantum unknown.

Section D4

Question 38.  Does the current approach to liability in AI-generated outputs allow effective enforcement of copyright?

RESPONSE: The law is clear in relation to AI-generated outputs. If a service being provided in the UK has been trained on UK material without permission, then the service is infringing and operating illegally. Enforcement of the law is clearly challenging given the lack of transparency by AI developers about the works they have used to train their models and for what purpose. See the proposals above on a strict liability regime for AI companies infringing copyright and the alternative enforcement mechanisms mentioned in previous responses.

Question 39.  What steps should AI providers take to avoid copyright infringing outputs?

RESPONSE: Comply with the law:

  • check copyright notices (which is easy with AI tools) and
  • obtain consent under licence or written permission to use substantial elements of content in which copyright subsists and is claimed and/or owned by a third party under a simple © Notice.

Section D5 - AI Output Labelling

Question 40. Do you agree that generative AI outputs should be labelled as AI generated? If so, what is a proportionate approach, and is regulation required?

RESPONSE:  YES and YES

Question 41. How can government support development of emerging tools and standards, reflecting the technical challenges associated with labelling tools?

RESPONSE: Unclear; in any event, labelling is easy with AI and tech tools.

Question 42. What are your views on the EU’s approach to AI output labelling?

RESPONSE: The EU AI Act, formally adopted by the EU in March 2024, requires providers of AI systems to mark their output as AI-generated content. This labelling requirement is meant to allow users to detect when they are interacting with content generated by AI systems, to address concerns like deepfakes and misinformation. Unfortunately, implementing one of the AI Act’s suggested methods for meeting this requirement—watermarking—may not be feasible or effective for some types of media. As the EU’s AI Office begins to enforce the AI Act’s requirements, the Government should closely evaluate the practicalities of AI watermarking.

Section D6: Digital Replicas and other issues

Question 43. To what extent would the approach(es) outlined in the first part of this consultation, in relation to transparency and text and data mining, provide individuals with sufficient control over the use of their image and voice in AI outputs?

RESPONSE: This is an important area that requires a more detailed review of the effectiveness of UK laws. Moral rights and personality/image rights such as exist in the EU would help individuals to have adequate control over their image, reputation and performance. This is an area that needs further review and, potentially, legislation. Ratification of international treaties on this topic, such as the Beijing Treaty, would be an important first step towards international cooperation on standards and enforcement frameworks.

There are significant limits on the control people have over their image and voice in the UK. To the extent image (or personality) rights are protected at all, it is via a mix of privacy law, data protection, contract law, moral rights and the common law tort of ‘passing off’. The approaches outlined in the first part of the consultation do not materially improve individuals’ position in relation to the use of their image and voice in AI outputs. They are directed to the use of copyright works. It does not follow that a copyright work is directly probative of a person’s image and/or voice. Further, it does not follow that the owner of that copyright work is the person in question.

Question 44. Could you share your experience or evidence of AI and digital replicas to date?

RESPONSE: Digital replicas created in real time can cause, and have caused, irreparable damage to many, including people we know who have been fooled by sophisticated AI scams. Real-time artificial intelligence replicas of real people, actors, well-known personalities and even family members, easily cloned from information available on social media and images shared on the Internet, can cause irreparable damage to individuals who may be ill prepared or ill-equipped to address them, and those in the public arena (including actors, artists and even politicians) may suffer financial harm as well as reputational damage.

There have also been examples of deepfake videos of politicians in recent times in the UK- for example of Sadiq Khan and Sir Keir Starmer.  A change in the law to explicitly cover acts like these, rather than leaving recourse only to adjacent rights such as defamation or passing off would, in our view, be advisable.

Section D7 – Emerging Issues

Question 45. Is the legal framework that applies to AI products that interact with copyright works at the point of inference clear? If it is not, what could the government do to make it clearer?

RESPONSE:  No comment – question unclear

Question 46. What are the implications of the use of synthetic data to train AI models and how could this develop over time, and how should the government respond?

RESPONSE: It is likely that the outputs and quality of AI tools trained on synthetic data will be degraded as compared to those trained on original/real data.

Question  47. What other developments are driving emerging questions for the UK’s copyright framework, and how should the government respond to them?

RESPONSE:  None, at present. 

Section E

  1. End notes
  • Lord Clement-Jones CBE[15] is a Liberal Democrat Life peer and the Liberal Democrat DSIT Spokesperson in the House of Lords, and inter alia, the Co-Chair of the All-Party Parliamentary Group on Artificial Intelligence. He was chair of the House of Lords Select Committee on Artificial Intelligence (2017–2018) and is a former member of the Select Committees on Communications and Digital (2011–2015), as well as a former Lib Dem Lords spokesperson on the Creative Industries (2004–10). He is an officer and active member of the All-Party Parliamentary Group on Intellectual Property.
  • Christian Gordon-Pullar is an IP specialist and an experienced intellectual asset manager with more than 30 years’ experience, ranked in the IAM Top 300 Global IP Strategists in 2020–2024 (inclusive). He has a proven track record in IP in the fields of financial services, pharmaceuticals and life sciences, fintech and e-commerce, working at C-level with venture capital and private equity firms across portfolios. Until August 2024, Christian was Chairman of Fox Robotics Ltd, a UK agritech AI start-up. He has led IP licensing efforts in multinationals across Europe and Asia. Based in Singapore from 2001 to 2019, he also has significant Asia experience, where he was head of Tech, Intellectual Property and Corporate Functions Legal, AsiaPac at JPMorgan. Before that, he was global head of intellectual property at Standard Chartered Bank and CEO of Standard Chartered’s global IP licensing entity.[16] Christian was formerly a solicitor in the IP Group (TMT) at Lovell White Durrant, now Hogan Lovells, from 1993–1999.
  1. Consent. The individuals named above would be agreeable to being contacted by the Intellectual Property Office (UK IPO) in relation to this consultation.

APPENDICES  

  1. Watermarking

Watermarking of copyright content for LLMs is an active area of research and discussion, with several approaches being explored to address copyright concerns in AI training and generation. While watermarking shows promise, its practicality for preventing copyright theft is still strongly debated.

  • Embedding Watermarks: Researchers have proposed methods to implant backdoors on embeddings, such as the Embedding Watermark method. This technique aims to protect the copyright of LLMs used for Embedding as a Service (EaaS) by inserting watermarks into the embeddings of texts containing trigger words.
  • Output Watermarking: Some techniques focus on watermarking the text generated by LLMs. These methods can significantly reduce the probability of generating copyrighted content, potentially by tens of orders of magnitude (a toy illustration of this style of watermarking appears after this list).
  • Model-Level Watermarking: A novel approach involves embedding signals directly into LLM weights, which can be detected by a paired detector. This method allows for watermarked model open-sourcing and can be more adaptable to new attacks.
  • Reinforcement Learning-Based Watermarking: A co-training framework using reinforcement learning has been proposed to iteratively train a detector and tune the LLM to generate easily detectable watermarked text while maintaining normal utility[17].
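
Purely as a toy illustration of the statistical 'green list' style of output watermarking described above (and not any particular vendor's or paper's exact method), the sketch below biases a stand-in model's sampling towards a pseudo-randomly chosen subset of tokens and then detects the watermark by counting how often generated tokens fall in that subset. All names, sizes and parameters are illustrative assumptions.

```python
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]   # stand-in vocabulary for a toy model
GREEN_FRACTION = 0.5                       # share of the vocabulary marked "green" at each step
BIAS = 4.0                                 # extra sampling weight given to green tokens

def green_list(prev_token: str) -> set:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate_watermarked(length: int, start: str = "tok0") -> list:
    """Toy 'generation': sample tokens but up-weight green ones, embedding the watermark."""
    out, prev, rng = [], start, random.Random(42)
    for _ in range(length):
        greens = green_list(prev)
        weights = [BIAS if t in greens else 1.0 for t in VOCAB]
        prev = rng.choices(VOCAB, weights=weights, k=1)[0]
        out.append(prev)
    return out

def detect(tokens: list, start: str = "tok0") -> float:
    """Return a z-score: large positive values suggest the watermark is present."""
    prev, hits = start, 0
    for t in tokens:
        if t in green_list(prev):
            hits += 1
        prev = t
    n = len(tokens)
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std

# Watermarked text scores well above zero; ordinary text should score near zero.
# print(detect(generate_watermarked(200)))
```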

While watermarking shows potential, several factors affect its practicality in preventing copyright theft:

  1. Effectiveness: Some studies demonstrate that watermarking can significantly reduce the likelihood of generating copyrighted content. However, the effectiveness varies depending on the specific method and implementation.
  2. Detection Challenges: Detecting watermarks in fully black-box models remains difficult. Some methods, like DE-COP, have shown promise in detecting copyrighted content in training data, even for black-box models.
  3. Trade-offs: There's an inherent trade-off between watermark transparency and effectiveness. Increased transparency may make watermarks more detectable and modifiable.
  4. Implementation Constraints: Watermarking during the LLM training phase cannot be applied to already trained models, limiting its applicability to existing LLMs[18].
  5. Legal and Ethical Considerations: The use of copyrighted material in training datasets remains a contentious issue, with ongoing legal debates and lawsuits.

In conclusion, while watermarking techniques for LLMs are advancing rapidly, their practicality in preventing copyright theft is still uncertain. These methods show promise in reducing the generation of copyrighted content and potentially tracking its use, but challenges remain in implementation, detection, and legal frameworks. As the field evolves, a combination of technical solutions, legal guidelines, and ethical considerations will likely be necessary to address copyright concerns in AI effectively.

  1. EU Transparency requirements

The EU AI Act requires a “sufficiently detailed summary” of training data for General-Purpose AI (GPAI) models to ensure transparency and protect stakeholders’ rights, such as copyright holders. The required level of granularity includes:

  1. Data Sources and Types: Providers must disclose the origins of datasets (e.g., public or private databases, web data, user-generated content) and specify the types of data used (e.g., text, images, audio) across all training stages, from pre-training to fine-tuning.
  2. Content Description: Summaries must detail dataset size, filtering processes (e.g., removal of harmful content), augmentation methods, and whether copyrighted or personal data is included. This also involves specifying licensing terms for the data.
  3. Narrative Explanations: Clear, non-technical descriptions must accompany technical details to ensure accessibility for both experts and laypersons.

This level of detail is designed to balance transparency with the protection of trade secrets while enabling stakeholders to exercise their rights effectively.

[1] See Section C for details.

[2] See Section C for details.

[3] https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence

[4] https://ed.newtonrex.com/

[5] https://www.lordclementjones.org/2024/12/21/governments-ai-copyright-consultation-is-selling-out-to-the-techbros/

[6] https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence

[7] https://www.prsformusic.com/m-magazine/news/prs-for-music-announces-ai-principles

[8] https://www.prsformusic.com/press/2024/creative-rights-in-ai-coalition-calls-on-government-to-protect-copyright

[9] https://m.facebook.com/100063658326152/photos/1084206480377953/

[10] The Post Implementation Review Process, published in 2020 found (in relation to the series of exceptions introduced in 2014), the review has not identified any improvements in the assumptions which would change the original assessment. Based on the largely positive responses from the call for evidence that the original objectives remain valid, and evidence to suggest the exceptions are operating as intended, we find that it would therefore be appropriate for the exceptions to remain in their current form.  See https://www.legislation.gov.uk/uksi/2014/1372/pdfs/uksiod_20141372_en_002.pdf

[11] Eva-Maria Painer v Standard Verlags GmbH (C-145/10) C:2011:798 at [89]–[94] at [121]

[12] Second Request for Reconsideration for Refusal to Register Théâtre D’opéra Spatial (Copyright Review Board September 5, 2023). U.S. Copyright Office, Library of Congress. Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 16 March 2023 88 FR 16190.

[13] Australia: it is necessary to identify a human author in order for there to be an original literary work (Telstra Corporation Limited v Phone Directories Company Pty Ltd (2010) FCA 44); Singapore: copyright only arises when a work is created by a human author (Asia Pacific Publishing Pte Ltd v Pioneers & Leaders (Publishers) Pte Ltd [2011] SGCA 37).

[14] Bently et al, Intellectual Property Law, 6th Edn at [138].

  1. UK Intellectual Property Office, "Consultation outcome—Artificial Intelligence and Intellectual Property: copyright and patents: Government response to consultation" (GOV.UK, updated 28 June 2022)

[15] https://www.libdems.org.uk/tim_clement_jones

[16] https://www.iam-media.com/strategy300/individuals/christian-gordon-pullar

[17] https://openreview.net/forum?id=r6aX67YhD9

[18] https://arxiv.org/html/2501.02446v1


Government's AI Copyright Consultation is Selling out to the Techbros

We have recently seen the publication of the Government's Copyright and AI Consultation paper. This is my take on it.

I co-chair the All-Party Parliamentary Group for AI, chaired the House of Lords AI Select Committee and wrote a book earlier this year on AI regulation. Before that I had a career as a lawyer defending copyright and creativity, and in the House of Lords I have been my Party’s creative industries spokesperson. For me, the question of IP and AI is absolutely the key issue which has arisen in relation to Generative AI models. It is one thing to use tech, another to be at the mercy of it.

It is a major issue not just in the UK, but around the world. Getty and the New York Times are suing in the United States, so too are many writers, artists and musicians, and it was at the root of the Hollywood actors' and writers' strikes last year.

Here in the UK, as the Government’s intentions have become clearer, the temperature has risen. We have seen the creation of a new campaign, the Creative Rights in AI Coalition (CRAIC), across the creative and news industries, and Ed Newton-Rex gathering over 30,000 signatories from creators and creative organisations.

But with the new government consultation which came out a few days ago, we are now faced with a proposal for a text and data mining exception, an issue which we thought had been settled under the last Government. It starts from the false premise of legal uncertainty.

As the News Media Association say:

The government’s consultation is based on the mistaken idea—promoted by tech lobbyists and echoed in the consultation—that there is a lack of clarity in existing copyright law. This is completely untrue: the use of copyrighted content by Gen AI firms without a license is theft on a mass scale, and there is no objective case for a new text and data mining exception.

There is no lack of clarity over how AI developers can legally access training data. UK law is absolutely clear that commercial organisations – including Gen AI developers – must license the data they use to train their Large Language Models (“LLMs”).

Merely because AI platforms such as Stability AI are resisting claims does not mean the law in the UK is uncertain. There is no need for developers to find it ‘difficult to navigate copyright law in the UK’.

AI developers have already, in a number of cases, reached agreements with news publishers. OpenAI has signed deals with publishers like News Corp, Axel Springer, The Atlantic, and Reuters, offering annual payments between $1 million and $5 million, with News Corp’s deal reportedly worth $250 million over five years.

There can be no excuse of market failure. There are well-established licensing solutions administered by a variety of long-standing mechanisms and collecting societies. There should be no uncertainty around the existing law. We have some of the most effective collective rights organisations in the world. Licensing is their bread and butter.

The Consultation paper says that “The government believes that the best way to achieve these objectives is through a package of interventions that can balance the needs of the two sectors”. Ministers Lord Vallance and Feryal Clark MP seem to think we need a balance between the creative industries and the tech industries. But what kind of balance is this?

The government is proposing to change the UK’s copyright framework by creating a text and data mining exception where rights holders have not expressly reserved their rights—in other words, an ‘opt-out’ system, where content is free to use unless a rights holder proactively withholds consent. To complement this, the government is proposing: (a) transparency provisions; and (b) provisions to ensure that rights reservation mechanisms are effective.

The government has stated that it will only move ahead with its preferred ‘rights reservation’ option if the transparency and rights reservation provisions are ‘effective, accessible, and widely adopted’. However, it will be up to Ministers to decide what provisions meet this standard, and it is clear that the government wishes to move ahead with this option regardless of workability, without knowing if their own standards for implementation can be met.

Although it is absolutely clear that unlicensed use of copyright works to train AI models is contrary to UK copyright law, the laws around transparency of these activities haven’t caught up. As well as using pirated e-books in their training data, AI developers scrape the internet for valuable professional journalism and other media in breach of both the terms of service of websites and copyright law, for use in training commercial AI models.

At present, developers can do this without declaring their identity, or they may use IP that was scraped so that it would appear in a search index for the completely different commercial purpose of training AI models.

How can rights owners opt out of something they don’t know about? AI developers will often scrape websites, or access other pirated material, before they launch an LLM in public. This means there is no way for IP owners to opt out of their material being taken before its inclusion in these models. And once the material has been used to train these models, the commercial value has already been extracted from IP scraped without permission, with no way to delete the data from those models.

The next wave of AI models responds to user queries by browsing the web to extract valuable news and information from professional news websites. This is known as retrieval-augmented generation (RAG). Without payment for extracting this commercial value, AI agents built by companies such as Perplexity, Google and Meta will effectively free ride on the professional hard work of journalists, authors and creators. At present such crawlers are hard to block.
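To illustrate why opt-outs are so fragile in practice: the main tool publishers currently have is robots.txt, which binds only crawlers that declare themselves and choose to obey it. The sketch below, using Python's standard library, assumes a hypothetical example.com site; GPTBot and CCBot are real, self-declared AI crawler user agents that publishers commonly list, while an undeclared or non-compliant scraper is simply unaffected.

```python
from urllib.robotparser import RobotFileParser

# robots.txt directives of the kind publishers now add to signal an AI-training opt-out.
# They only bind crawlers that declare these user agents and honour the file.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for agent in ["GPTBot", "CCBot", "SomeUndeclaredScraper"]:
    allowed = parser.can_fetch(agent, "https://example.com/article")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")

# GPTBot and CCBot are blocked, but a crawler that does not identify itself
# (or ignores robots.txt entirely) is unaffected, which is the point made above.
```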

This is incredibly concerning, given that no effective ‘rights reservation’ system for the use of content by Gen AI models has been proposed or implemented anywhere in the world, making the government proposals entirely speculative.

As the NMA also say, what the government is proposing is an incredibly unfair trade-off: giving the creative industries a vague commitment to transparency, whilst handing the rights of hundreds of thousands of creators to Gen AI firms. While creators are desperate for a solution after years of copyright theft by Gen AI firms, making a crime legal cannot be the solution to mass theft.

We need transparency and a clear statement about copyright. We absolutely should not expect artists to have to opt out. AI developers must: be transparent about the identity of their crawlers; be transparent about the purposes of their crawlers; and have separate crawlers for distinct purposes.

Unless news publishers and the broader creative industries can retain control over their data – making UK copyright law enforceable – AI firms will be free to scrape the web without remunerating creators. This will not only reduce investment in trusted journalism, but it will ultimately harm innovation in the AI sector. If less and less human-authored IP is produced, tech developers will lack the high-quality data that is the essential fuel in generative AI.

Amending UK law to address the challenges posed by AI development, particularly in relation to copyright and transparency, is essential to protect the rights of creators, foster responsible innovation, and ensure a sustainable future for the creative industries.

This should apply regardless of the country in which the scraping of copyright material or the training takes place, provided developers market their product in the UK.

It will also ensure that AI start-ups based in the UK are not put at a competitive disadvantage due to the ability of international firms to conduct training in a different jurisdiction.

It is clear that AI developers have used their lobbying clout to persuade the government that a new exemption from copyright in their favour is required. As a result, the government seem to have sold out to the tech bros.

In response the creative industries and supporters such as myself will be vigorously opposing government plans for a new text and data mining exemption and ensuring we get answers to our questions:

What led the government to do a u-turn on the previous government’s decision to drop the text and data mining exemption it proposed?

What estimate has it made of the damage to the creative industries from implementing its clearly favoured option of a TDM exception plus opt-out?

Is damaging the most successful UK economic sector for the benefit of US AI developers what it means by balance?

Why has it not included the possibility of an opt-in TDM exception in its consultation paper options?

What is the difference between rights reservation and opting out? Isn’t this pure semantics?

What examples of successful, workable opt-outs or rights reservations from TDM can it draw on, particularly for small rights holders? What research has it done? The paper essentially admits that effective technology is not there yet. Isn’t it clear that the EU opt-out system under the Copyright Directive has not delivered clarity?

What regulatory mechanism, if any, does the government envisage if its proposal for a TDM exception with rights reservation/opt-out is adopted? How are creators going to be sure any new system would work in the first place?


We Need Better Protection for Citizens in the Face of Automated Decision Making

The Second Reading of my Private Member's Bill took place recently. It is designed to give greater rights to all of us who are subject to AI and automated decision-making in government, which is becoming increasingly prevalent given the new Labour government's enthusiasm for "digitally transforming" our public services.

 I thank Big Brother Watch, the Public Law Project and the Ada Lovelace Institute, which, each in their own way, have provided the evidence and underpinned my resolve to ensure that we regulate the adoption of algorithmic and AI tools in the public sector, which are increasingly being used across it to make and support many of the highest-impact decisions affecting individuals, families and communities across healthcare, welfare, education, policing, immigration and many other sensitive areas of an individual’s life. I also thank the Public Bill Office, the Library and other members of staff for all their assistance in bringing this Bill forward and communicating its intent and contents, and I thank all noble Lords who have taken the trouble to come to take part in this debate this afternoon.

The speed and volume of decision-making that new technologies will deliver is unprecedented. They have the potential to offer significant benefits, including improved efficiency and cost effectiveness in government operations, enhanced service delivery and resource allocation, better prediction and support for vulnerable people and increased transparency in public engagement. However, the rapid adoption of AI in the public sector also presents significant risks and challenges, with the potential for unfairness, discrimination and misuse through algorithmic bias and the need for human oversight, a lack of transparency and accountability in automated decision-making processes and privacy and data protection concerns.

Incidents such as the 2020 A-level and GCSE grading fiasco, where an algorithm used to estimate grades for exams cancelled because of Covid-19 saw students, particularly those from lower-income areas, unfairly miss out on university places, have starkly illustrated the dangers of unchecked algorithmic systems in public administration. That led to widespread public outcry and a loss of trust in government use of technology.

Big Brother Watch’s investigations have revealed that councils across the UK are conducting mass profiling and citizen scoring of welfare and social care recipients. Its report, entitled Poverty Panopticon: The Hidden Algorithms Shaping Britain’s Welfare State, uncovered alarming statistics. Some 540,000 benefits applicants are secretly assigned fraud risk scores by councils’ algorithms before accessing housing benefit or council tax support. Personal data from 1.6 million people living in social housing is processed by commercial algorithms to predict rent non-payers. Over 250,000 people’s data is processed by secretive automated tools to predict the likelihood of abuse, homelessness or unemployment.

Big Brother Watch criticises the nature of these algorithms, stating that most are secretive, unevidenced, incredibly invasive and likely discriminatory. It argues that these tools are being used without residents’ knowledge, effectively creating tools of automated suspicion. The organisation rightly expressed deep concern that these risk-scoring algorithms could be disadvantaging and discriminating against Britain’s poor. It warns of potential violations of privacy and equality rights, drawing parallels to controversial systems like the Metropolitan Police’s gangs matrix database, which was found to be operating unlawfully. From a series of freedom of information requests last June, Big Brother Watch found that a flawed DWP algorithm wrongly flagged 200,000 housing benefit claimants for possible fraud and error, which meant that thousands of UK households every month had their housing benefit claims unnecessarily investigated.

In August 2020, the Home Office agreed to stop using an algorithm to help sort visa applications after it was discovered that the algorithm contained entrenched racism and bias, and following a challenge from the Joint Council for the Welfare of Immigrants and the digital rights group Foxglove. The algorithm essentially created a three-tier system for immigration, with a speedy boarding lane for white people from the countries most favoured by the system. Privacy International has raised concerns about the Home Office's use of a current tool called Identify and Prioritise Immigration Cases—IPIC—which uses personal data, including biometric and criminal records to prioritise deportation cases, arguing that it lacks transparency and may encourage officials to accept recommended decisions without proper scrutiny.

Automated decision-making has been proven to lead to harms in privacy and equality contexts, such as in the Harm Assessment Risk Tool, which was used by Durham Police until 2021, and which predicted reoffending risks partly based on an individual’s postcode in order to inform charging decisions. All these cases illustrate how ADM can perpetuate discrimination. The Horizon saga illustrates how difficult it is to secure proper redress once the computer says no.

There is no doubt that our new Government are enthusiastic about the adoption of AI in the public sector. Both the DSIT Secretary of State and Feryal Clark, the AI Minister, are on the record about the adoption of AI in public services. They have ambitious plans to use AI and other technologies to transform public service delivery. Peter Kyle has said:

“We’re putting AI at the heart of the government’s agenda to boost growth and improve our public services”,

and

“bringing together digital, data and technology experts from across Government under one roof, my Department will drive forward the transformation of the state”.—[Official Report, Commons, 2/9/24; col. 89.]

Feryal Clark has emphasised the Administration’s desire to “completely transform digital Government” with DSIT. As the Government continue to adopt AI technologies, it is crucial to balance the potential benefits with the need for responsible and ethical implementation to ensure fairness, transparency and public trust.

The Ada Lovelace Institute warns of the unintended consequences of AI in the public sector, including the risk of entrenching existing practices, instead of fostering innovation and systemic solutions. As it says, the safeguards around automated decision-making, which exist only in data protection law, are therefore more critical than ever in ensuring people understand when a significant decision about them is being automated, why that decision is made, and have routes to challenge it, or ask for it to be decided by a human.

Our citizens need greater, not less, protection, but rather than accepting that need, we see the Government following in the footsteps of their predecessor by watering down such rights as there are under GDPR Article 22 not to be subject to automated decision-making. We will, of course, be discussing these aspects of the Data (Use and Access) Bill in Committee next week.

ADM safeguards are critical to public trust in AI, but progress has been glacial. Take the Algorithmic Transparency Recording Standard, which was created in 2022 and is intended to offer a consistent framework for public bodies to publish details of the algorithms used in making these decisions. Six records were published at launch, and only three more seem to have been published since then. The previous Government announced earlier this year that the implementation of the Algorithmic Transparency Recording Standard will be mandatory for departments. Minister Clark in the new Government has said,

“multiple records are expected to be published soon”,

but when will this be consistent across government departments? What teeth do the Central Digital and Data Office and the Responsible Technology Adoption Unit, now both within DSIT, have to ensure the adoption of the standard, especially in view of the planned watering down of the Article 22 GDPR safeguards? Where is the promised repository for ATRS records? What about the other public services in local government too?

The Public Law Project, which maintains a register called Tracking Automated Government, believes that as of October last year there were more than 55 examples of public sector ADM systems in use. Where is the transparency on those? The fact is that the Government’s Algorithmic Transparency Recording Standard, while a step in the right direction, remains voluntary and lacks comprehensive adoption or indeed a compliance mechanism or opportunity for redress. The current regulatory landscape is clearly inadequate to address these challenges. Despite the existing guidance and framework, there is no legally enforceable obligation on public authorities to be transparent about their use of ADM and algorithmic systems, or to rigorously assess their impact.

To address these challenges, several measures are needed. We need to see the creation of and adherence to ethical guidelines and accountability mechanisms for AI implementation; a clear regulatory framework and standards for use in the public sector; increased transparency and explainability of the adoption and use of AI systems; investment in AI education; and workforce development for public sector employees. We also need to see the right of redress, with a strengthened right for the individuals to challenge automated decisions.

My Bill aims to establish a clear mandatory framework for the responsible use of algorithmic and automated decision-making systems in the public sector. It will help to prevent the embedding of bias and discrimination in administrative decision-making, protect individual rights and foster public trust in government use of new technologies.

I will not adumbrate all the elements of the Bill. In an era when AI and algorithmic systems are becoming increasingly central to government ambitions for greater productivity and public service delivery, this Bill, I hope noble Lords agree, is crucial to ensuring that the benefits of these technologies are realised while safeguarding democratic values and individual rights. By ensuring that ADM systems are used responsibly and ethically, the Bill facilitates their role in improving public service delivery, making government operations more efficient and responsive.

The Bill is not merely a response to past failures but a proactive measure to guide the future use of technology within government and empower our citizens in the face of these powerful new technologies. I hope that the House and the Government will agree that this is the way forward.


Lord C-J Commentary on the new Government's Science and Technology Programme

Sadly we only had five minutes' speaking time in the recent King's Speech debate. Here is an extended version of my speech, which goes into greater depth on what I believe the Government should be doing in this area if it is to fulfil its growth-through-innovation agenda and expresses some caveats about how they plan to do this.

When we debated the new Government's proposals in the King's Speech recently, the House of Lords gave a particularly warm welcome to Lord Vallance of Balham, formerly Sir Patrick Vallance, as the new Minister of State in the Department. We know from the book "The Long Shot" how, while the Government's Chief Scientific Adviser, he played a critical role in the establishment of the UK Vaccine Taskforce, which was set up in April 2020 in response to the COVID-19 pandemic. He was pivotal in the recruitment of Dame Kate Bingham to chair the Vaccine Taskforce and in organising the overall strategy for the UK's development and distribution of COVID-19 vaccines. For that we should be eternally grateful.

I welcome the Government's growth-through-innovation agenda and its mission to enhance public services through the deployment of new technology, and also the concentration of digital functions in DSIT, which, in the words of the new Secretary of State, Peter Kyle, will become the centre for "digital expertise and delivery in government, improving how the government and public services interact with citizens".

The Government is expanding the department's scope and size by bringing in experts in data, digital and AI from the Government Digital Service, the Incubator for AI (i.AI) and the Central Digital and Data Office to unite efforts to implement digital transformation of public services under one roof. There is great potential in justice, education and healthcare, to name but three areas.

This is particularly crucial in the adoption of innovative technologies and tools in our healthcare, for which Liberal Democrats believe there should be ring-fenced budgets. We need to ensure interoperability of IT systems too.

The Government have also committed to modernising public sector procurement frameworks to enable start-ups and SMEs to drive public sector innovation and better public services. Will a clear, transparent framework of standards incorporating ethical principles be established, however? Public sector adoption is very desirable but requires trust on the part of the public and the citizen. For instance, we need to ensure that citizens can assert their rights when faced with automated decision-making or live facial recognition.

It has felt, under the previous regime, that universities have been under continual threat from government rather than valued as the engines of knowledge and growth. We need to be far more internationally outward-looking, in particular fixing our relationship with the EU, using science and technology to address societal challenges for a more resilient and prosperous future, in the words of the Royal Society.

I welcome the new Industrial Strategy Council. Does this mean we can plan for 10 years of stability and opportunity creation in the science and tech sector? Successive policy changes to the R&D tax regime over the past several years have created uncertainty and additional red tape for SMEs, putting at risk the UK's reputation as a location for innovative businesses. We need to give businesses certainty and incentivise them to invest in new technologies to grow the economy, create good jobs and tackle the climate crisis.

Opening up what can be a blocked pipeline all the way from R&D to commercialisation, from university spinout through start-up to scale-up and IPO, and crowding in and de-risking private investment through the National Wealth Fund, the British Business Bank and post-Mansion House pension reforms, are crucial, with all their local, regional, national and UK-wide aspects, recognising the importance of innovation clusters and centres of excellence. We need to tackle regional disparities and develop the innovation clusters with greater devolution to combined authorities.

Digital skills and digital literacy are also crucial, but to deploy digital tools successfully we also need a pipeline of creative, collaborative and critical-thinking skills. A massive skills and upskilling agenda is needed in the face of technology advances. The focus in training should be on lifelong skills grants, reforming the apprenticeship levy, and boosting vocational training and apprenticeships, and many of the Government's proposals in this respect are welcome.

In this context, as the chair of a university governing council I very much welcome the Government's new tone on the value of universities, of long-term settlements, and of resetting relations with Europe and international research collaboration.

The role of university research and spinouts is crucial. The Research Excellence Framework has the perverse incentive of discouraging cooperation. We should be encouraging strategic partnerships in research, especially internationally. We need to be full-throated members of Horizon; the uncertainty has been extremely damaging to collaboration. I hope the Government will now commit to joining the European Innovation Council as well.

Last year Labour set out its plan for the life sciences. It committed to the investment of £10bn into R&D. Further, the plan said that Labour would see the creation of 100,000 jobs in the life sciences sector by 2030. The document contains a range of further welcome pledges, including strengthening the Office for Life Sciences and the Life Sciences Council, and bringing laboratory clusters within the scope of the 'nationally significant infrastructure' regime in England.

We need to ensure Government spending on R&D keeps pace with other nations, and establish a long-term strategy for science, research and innovation that commands cross-party support. Research, development and innovation are crucial to driving productivity growth, yet our current levels of R&D investment and productivity lag behind the G7. I hope this means that we will soon see whether spending plans for government R&D expenditure by 2030 and 2035 match their words.

And disproportionately high overseas researcher visa costs must be lowered, as Lord Vallance recommended in his review of digital technologies. UK visa costs are up to 17 times higher than those of other leading science nations. The Royal Society have called this a "punitive tax on talent".

But support for innovation should not be unconditional or at any cost. I hope this government will not fall into the trap of viewing regulation as necessarily the enemy of innovation. We need guardrails to ensure that, for example, AI adoption leads to public benefit.

I hope therefore that the reference to AI regulation in the King's Speech, but failure to announce a bill, is only a timing issue. What is the Government's intention, especially given that an AI bill was heavily trailed in the media?

With AI technologies continuing to develop at an exponential rate, clarity on regulation is needed by developers and adopters. There is the question, too, of the extent to which the new Government will depart from the current sectoral approach to regulating AI and adopt a cross-sectoral approach. What does the King's Speech reference to regulating "the most powerful artificial intelligence models" actually refer to? Will the Government be launching yet another consultation on AI regulation?

There is no doubt we need to seize the opportunities of AI whilst making sure we mitigate its risks, ensuring that ethical standards for AI development and use are adopted.

 Liberal Democrats believe we need to create a clear, workable and well-resourced cross-sectoral regulatory framework for artificial intelligence that:

  • Promotes innovation while creating certainty for AI users, developers and investors.
  • Establishes transparency and accountability for AI systems in the public sector.
  • Ensures the use of personal data and AI is unbiased, transparent and accurate, and respects the privacy of innocent people.

The Government in particular should lead the way in ensuring that there is a high level of transparency and opportunity for redress when algorithmic and automated systems are used in government. I commend my new Private Member's Bill (the Public Authority Algorithmic and Automated Decision-Making Systems Bill) to it!

The Government should also negotiate the UK's participation in the Trade and Technology Council with the US and the EU, so we can play a leading role in global AI regulation, and we should work with international partners on agreeing common global standards for AI risk and impact assessment, testing, training, monitoring and audit.

As regards AI regulation in the King's Speech itself, we are promised a Product Safety and Metrology Bill which could require alignment of AI-driven products with the EU AI Act, which seems to be putting the cart well in front of the AI regulatory horse.

We do need, however, to ensure that high-risk systems are mandated to adopt international ethical and safety standards. At the same time, in this age of IoT, we should require all suppliers to provide a short, clear version of their terms and conditions, setting out the key facts as they relate to individuals' data and privacy.

As regards the creative industries, there are clearly great opportunities in relation to the use of AI, but there are also challenges and big questions over authorship and intellectual property, and many artists feel threatened, which was the root cause of the recent Hollywood writers' and actors' strikes. What is the Government's approach?

We need to establish very clearly that generative AI systems need a licence to ingest copyright material for training purposes, just as Mumsnet and the New York Times are asserting, and that there is an obligation of transparency in the use of data sets and original content.

Lord Vallance is on record as wanting certainty in the relationship between IP rights and generative AI for innovator and investor confidence. And this should be the case for creatives too. Copyright content needs to be properly remunerated by the tech platforms. The bill needs to make clear that platforms profit from content and need to pay properly and fairly, on benchmarked terms and with reference to value for end users, when content is used for training Large Language Models.

And when will the Government set up the promised new Regulatory Innovation Office? This was promised as an organisation to help "regulators to update regulation, speed up approval timelines and co-ordinate issues that span existing boundaries", and as a "pro-innovation body" designed to "set targets for tech regulators, end uncertainty for businesses, turbocharge output, and boost economic growth". We need in particular to know whether it will replace the Digital Regulation Cooperation Forum.

We must also ensure we have the right climate for FDI. The Harrington Report called for a new Business Investment Strategy for the Office for Investment. Despite the previous Government's Life Sciences Vision, we have seen pharma company Eli Lilly pulling investment in laboratory space in London because the UK "does not invite inward investment at this time". AstraZeneca decided to build its next plant in Ireland because of the UK's "discouraging" tax rate.

We also need to modernise employment rights to make them fit for the age of the gig economy, including by establishing a new 'dependent contractor' employment status in between employment and self-employment, with entitlements to basic rights such as minimum earnings levels, sick pay and holiday entitlement.

There is a great need for greater diversity and inclusion in the AI workforce and in science and technology more broadly. Only one in four senior tech employees in the UK are women, and only 14% are from ethnic minorities.

I hope the Government too is fully committed, despite its growth agenda, to full-hearted support for the Competition and Markets Authority in the use of its powers under the new Digital Markets, Competition and Consumers Act. I welcome the CMA's market investigation into cloud services and its reassurance that it is looking broadly at the anti-competitive practices of the service providers, such as vendor lock-in tactics and non-competitive procurement.

Then again how will the government kickstart better progress on Project Gigabit? Given the competitive model for rollout of broadband services that has been chosen, investors in alternative providers to the incumbents need reassurance that their investment is going onto a level playing field and not one tilted in favour of the incumbents. 

Also, in terms of vital cross-departmental working and joining up government on science and technology policy, we need to know what the role of the National Science and Technology Council will be and what its key priorities are.

There was no mention in Labour's manifesto of the potential impact of AI on the workplace. The TUC and the Institute for the Future of Work are among those who have called for new legislation to create further legal protections for workers and employers in relation to the use of AI. The Government should introduce safeguards against the invasion of privacy through surveillance technology and discriminatory algorithmic decision-making in the workplace along the lines of the TUC draft bill, and algorithmic impact assessment along the lines of IFOW's proposals.

The Government will also need to decide how to follow up on the recommendations of recent key reports such as:

  • Professor Dame Angela McLean’s Review of Life Sciences
  • The Vallance Review of Pro-innovation Regulation of Digital Technologies
  • The Independent Review of Research Bureaucracy by Professor Adam Tickell
  • The Independent Review of the UKRI by Sir David Grant
  • The Independent Review of the UK’s Research, Development and Innovation Landscape by Sir Paul Nurse
  • The O’Shaughnessy Report on Clinical Trials
  • The Independent Review of the Future of Compute by Professor Zoubin Ghahramani FRS and 
  • The Independent Review of University Spin-out Companies by Professor Irene Tracey and Dr. Andrew Williamson

More broadly, it will need to set out its approach to the science and technology framework for DSIT set out by the previous government in 2023 with its 10 priority areas. Will this be revised? If so, they need to set measurable targets and key outcomes in the priority areas. The Government will also need to take a clear view on the key technologies we should be assisting in developing and commercialising.

Then there are the pre-existing financial commitments in the science and technology field. The Chancellor has said she will be checking all the previous government's commitments for affordability. Which of the previous Government's financial commitments will she confirm? For instance:

  • The £7.4 million upskilling fund pilot to help SMEs develop AI skills
  • Investing up to £100 million in the Alan Turing Institute over the next five years (up from £50 million)
  • The £100 million investment by the British Business Bank into ICG in respect of the Long-term Investment for Technology and Science (LIFTS) initiative
  • The £1.1 billion funding for 65 Centres for Doctoral Training (CDTs) through the Engineering and Physical Sciences Research Council (EPSRC), covering key technologies like AI and engineering biology

As regards the bills in the King's Speech, I look forward to seeing the details, but the Digital Information and Smart Data Bill does seem to be heading in the right direction in the areas being reinstated. The retention and enhancement of public trust in data use and sharing is the overriding need, so that the potential of data can be unleashed through better, trusted sharing of data. It is really important that we do more to educate the public about how and where our data is used and what powers individuals have to find out this information.

I hope that, other than a few clarifications, especially in the research area and in terms of the constitution of the ICO, we are not going to exhume some of the worst areas of the old DPDI Bill, and that we have ditched the idea of a Brexit EU divergence dividend achieved by watering down so many data subject rights.

Will the Government give a firm commitment to safeguard our data adequacy with the EU? Will the bill introduce the promised ban on the creation of sexually explicit deepfakes?

I also hope that the Government will confirm that the intent of the reinstated digital verification provisions is not a compulsory national digital ID but the creation of a market in digital ID providers that gives choice to the citizen.

Given that LinesearchbeforeUdig, or LSBUD, is claimed already to achieve the aims of the National Underground Asset Register (NUAR), to be more widely used than NUAR and to be more cost-effective, I hope also that Ministers will meet LSBUD and provide us all with much greater clarity around these proposals.

I hope that we can include other positive aspects of the late unlamented DPDI Bill in the bill: more action on online fraud, digital identity theft, deepfakes in elections, misinformation and disinformation, and misogyny as a hate crime; there is quite a list of possibilities. To these we could add new models of personal data control, advocated as long ago as 2017 in the Hall-Pesenti review, especially through new data communities and institutions, an enhanced ability to exercise our right to data portability, especially in real time, and more regulatory oversight over the use of biometrics and biometric technologies.

I of course welcome the pledge to give coroners more powers to access information held by technology companies after a child's death and to ban the creation of sexually explicit deepfakes.

As regards the Cyber Security and Resilience Bill, events of recent days have made it clear we are not just talking about threats from bad actors. They remind us how dependent we are on just a few overly dominant major tech companies. With Microsoft and AWS enjoying a combined UK market share of around 70-90%, according to the Competition and Markets Authority's own research, the lack of competition presents a serious concern for our nation's security and resilience. There needs to be a rethink on critical national infrastructure, such as cloud services and business software, which are now essential public utilities, and also on how we are wholesale replacing reliable analogue communication with digital systems without backup.

In the bill I hope we will see the long-awaited amendment of the Computer Misuse Act to include a statutory public interest defence, as called for by CyberUp, to allow white-hat research into computer systems, as the Vallance report recommended. The rules for computer evidence must be changed too. We must have no more Horizon scandals!


Data Protection and Digital Information Bill lost in wash-up: Hurray!


Lords Debate Report on AI in Weapon Systems

Recently the House of Lords debated the report of the AI in Weapon Systems Committee, Proceed with Caution.

This is an edited version of what I said

Autonomous weapon systems present some of the most emotive and high-risk challenges posed by AI. We have heard a very interesting rehearsal of some of the issues surrounding use and possible benefits, but particularly the risks. I believe that the increasing use of drones in particular, potentially autonomously, in conflicts such as Libya, Syria and Ukraine and now by Iran and Israel, together with AI targeting systems such as Lavender, highlights the urgency of addressing the governance of weapon systems.

The implications of autonomous weapons systems—AWS—are far-reaching. There are serious risks to consider, such as the escalation and proliferation of conflict, lack of accountability for actions, and cybersecurity vulnerabilities. There is the lack of the empathy and kindness that humans are capable of in making military decisions. There is misinformation and disinformation, which is a new kind of warfare.

Professor Stuart Russell, in his Reith lecture on this subject in 2021, painted a stark picture of the risks posed by scalable autonomous weapons capable of destruction on a mass scale. This chilling scenario underlines the urgency with which we must approach the regulation of AWS. The UK military sees AI as a priority for the future, with plans to integrate "boots and bots", to quote a senior military officer.

The UK integrated review of 2021 made lofty commitments to ethical AI development. Despite this and the near global consensus on the need to regulate AWS, the UK has not yet endorsed limitations on their use. The UK’s defence AI strategy and its associated policy statement, Ambitious, Safe, Responsible, acknowledged the line that should not be crossed regarding machines making combat decisions but lacked detail on where this line is drawn, raising ethical, legal and indeed moral concerns.

As we explored this complex landscape as a committee—and it was quite a journey for many of us—we found that, while the term AWS is frequently used, its definition is elusive. The inconsistency in how we define and understand AWS has significant implications for the development and governance of these technologies. However, the committee demonstrated that a working definition is possible, distinguishing between fully and partially autonomous systems. This is clearly still resisted by the Government, as their response has shown.

The current lack of definition allows for the assertion that the UK neither possesses nor intends to develop fully autonomous systems, but the deployment of autonomous systems raises questions about accountability, especially in relation to international humanitarian law. The Government emphasise the sufficiency of existing international humanitarian law while a human element in weapon deployment is retained. The Government have consistently stated that UK forces do not use systems that deploy lethal force without human involvement, and I welcome that.

Despite the UK’s reluctance to limit AWS, the UN and other states advocate for specific regulation. The UN Secretary-General, António Guterres, has called autonomous weapons with life-and-death decision-making powers “politically unacceptable, morally repugnant” and deserving of prohibition, yet an international agreement on limitation remains elusive.

In our view, the rapid development and deployment of AWS necessitates regulatory frameworks that address the myriad of challenges posed by these technologies. The relationship between our own military and the private sector makes it even more important that we address those challenges and ensure compliance with international law to maintain ethical standards and human oversight. I share the optimism of the noble Lord, Lord Holmes, that this is both possible and necessary.

Human rights organisations have urged the UK to lead in establishing new international law on autonomous weapon systems to address the current deadlock in conventional weapons conventions, and we should do so. There is a clear need for the UK to play an active role in shaping the nature of future military engagement.

A historic moment arrived last November with the UN’s first resolution on autonomous weapons, affirming the application of international law to these systems and setting the stage for further discussion at the UN General Assembly. The UK showed support for the UN resolution that begins consultations on these systems, which I very much welcome. The Government have committed also to explicitly ensure human control at all stages of an AWS’s life cycle. It is essential to have human control over the deployment of the system, to ensure both human moral agency and compliance with international humanitarian law.

However, the Government still have a number of questions to answer. Will they respond positively to the call by the UN Secretary-General and the International Committee of the Red Cross that a legally binding instrument be negotiated by states by 2026? How do the Government intend to engage at the Austrian Government’s conference “Humanity at the Crossroads”, which is taking place in Vienna at the end of this month? What is the Government’s assessment of the implications of the use of AI targeting systems under international humanitarian law? Can the Government clarify how new international law on AWS would be a threat to our defence interests? What factors are preventing the Government adopting a definition of AWS, as the noble Lord, Lord Lisvane, asked? What steps are being taken to ensure meaningful human involvement throughout the life cycle of AI-enabled military systems? Finally, will the Government continue discussions at the Convention on Certain Conventional Weapons, and continue to build a common understanding of autonomous weapon systems and elements of the constraints that should be placed on them?

 The committee rightly warns that time is short for us to tackle the issues surrounding AWS. I hope the Government will pay close and urgent attention to its recommendations.


Lord Holmes' Private Member's Bill a "stake in the ground", says Lord C-J

Lord Holmes of Richmond recently introduced his Private Member's Bill, the Artificial Intelligence (Regulation) Bill.

This may not go as far in regulating AI as many want to see, but it is a good start. This is what Lord Holmes says about it on his own website:

https://lordchrisholmes.com/artificial-intelligence-regulation-bill/

and this is what I said at its second reading recently

My Lords, I congratulate the noble Lord, Lord Holmes, on his inspiring introduction and on stimulating such an extraordinarily good and interesting debate.

The excellent House of Lords Library guide to the Bill warns us early on:

“The bill would represent a departure from the UK government’s current approach to the regulation of AI”.

Given the timidity of the Government’s pro-innovation AI White Paper and their response, I would have thought that was very much a “#StepInTheRightDirection”, as the noble Lord, Lord Holmes, might say.

There is clearly a fair wind around the House for the Bill, and I very much hope it progresses and we see the Government adopt it, although I am somewhat pessimistic about that. As we have heard in the debate, there are so many areas where AI is and can potentially be hugely beneficial. However, as many noble Lords have emphasised, it also carries risks, not just of the existential kind, which the Bletchley Park summit seemed to address, but others mentioned by noble Lords today, such as misinformation, disinformation, child sexual abuse, and so on, as well as the whole area of competition—the issue of the power and the asymmetry of these big tech AI systems and the danger of regulatory capture.

It is disappointing that, after a long gestation of national AI policy-making, which started so well back in 2017 with the Hall-Pesenti review, contributed to by our own House of Lords Artificial Intelligence Committee, the Government have ended up by producing a minimalist approach to AI regulation. I liked the phrase used by the noble Lord, Lord Empey, “lost momentum”, because it certainly feels like that after this period of time.

The UK’s National AI Strategy, a 10-year plan for UK investment in and support of AI, was published in September 2021 and accepted that in the UK we needed to prepare for artificial general intelligence. We needed to establish public trust and trustworthy AI, so often mentioned by noble Lords today. The Government had to set an example in their use of AI and to adopt international standards for AI development and use. So far, so good. Then, in the subsequent AI policy paper, AI Action Plan, published in 2022, the Government set out their emerging proposals for regulating AI, in which they committed to develop

“a pro-innovation national position on governing and regulating AI”,

to be set out in a subsequent governance White Paper. The Government proposed several early cross-sectoral and overarching principles that built on the OECD principles on artificial intelligence: ensuring safety, security, transparency, fairness, accountability and the ability to obtain redress.

Again, that is all good, but the subsequent AI governance White Paper in 2023 opted for a “context-specific approach” that distributes responsibility for embedding ethical principles into the regulation of AI systems across several UK sector regulators without giving them any new regulatory powers. I thought the analysis of this by the noble Lord, Lord Young, was interesting. There seemed to be no appreciation that there were gaps between regulators. That approach was confirmed this February in the response to the White Paper consultation.

Although there is an intention to set up a central body of some kind, there is no stated lead regulator, and the various regulators are expected to interpret and apply the principles in their individual sectors in the expectation that they will somehow join the dots between them. There is no recognition that the different forms of AI are technologies that need a comprehensive cross-sectoral approach to ensure that they are transparent, explainable, accurate and free of bias, whether they are in an existing regulated or unregulated sector. As noble Lords have mentioned, discussing existential risk is one thing, but going on not to regulate is quite another.

Under the current Data Protection and Digital Information Bill, data subject rights regarding automated decision-making—in practice, by AI systems—are being watered down, while our creatives and the creative industries are up in arms about the lack of support from government in asserting their intellectual property rights in the face of the ingestion of their material by generative AI developers. It was a pleasure to hear what the noble Lord, Lord Freyberg, had to say on that.

For me, the cardinal rules are that business needs clarity, certainty and consistency in the regulatory system if it is to develop and adopt AI systems, and we need regulation to mitigate risk to ensure that we have public trust in AI technology. Regulation is not necessarily the enemy of innovation; it can be a stimulus. That is something that we need to take away from this discussion.

This is where the Bill of the noble Lord, Lord Holmes, is an important stake in the ground, as he has described. It provides for a central AI authority that has a duty of looking for gaps in regulation; it sets out extremely well the safety and ethical principles to be followed; it provides for regulatory sandboxes, which we should not forget are an innovation invented in the UK; and it provides for AI responsible officers and for public engagement. Importantly, it builds in a duty of transparency regarding data and IP-protected material where they are used for training purposes, and for labelling AI-generated material, as the noble Baroness, Lady Stowell, and her committee have advocated. By itself, that would be a major step forward, so, as the noble Lord knows, we on these Benches wish the Bill very well, as do all those with an interest in protecting intellectual property, as we heard the other day at the round table that he convened.

However, in my view what is needed at the end of the day is the approach that the interim report of the Science, Innovation and Technology Committee recommended towards the end of last year in its inquiry into AI governance: a combination of risk-based cross-sectoral regulation and specific regulation in sectors such as financial services, applying to both developers and adopters, underpinned by common trustworthy standards of risk assessment, audit and monitoring. That should also provide recourse and redress, as the Ada Lovelace Institute, which has done so much work in the area, asserts.

That should include the private sector, where there is no effective regulator for the workplace, as has been mentioned, and the public sector, where there is no central or local government compliance mechanism; no transparency yet in the form of a public register of use of automated decision-making, despite the promised adoption of the algorithmic transparency recording standard; and no recognition by the Government that explicit legislation and/or regulation for intrusive AI technologies used in the public sector, such as live facial recognition and other biometric capture, is needed. Then, of course, we need to meet the IP challenge. We need to introduce personality rights to protect our artists, writers and performers. We need the labelling of AI-generated material alongside the kinds of transparency duties contained in the noble Lord's Bill.

Then there is another challenge, which is more international. We have world-beating AI researchers and developers. How can we ensure that, despite differing regulatory regimes—for instance, between ourselves and the EU or the US—developers are able to commercialise their products on a global basis and adopters can have the necessary confidence that the AI product meets ethical standards?

The answer, in my view, lies in international agreement on common standards such as those of risk and impact assessment, testing, audit, ethical design for AI systems, and consumer assurance, which incorporate what have become common internationally accepted AI ethics. Having a harmonised approach to standards would help provide the certainty that business needs to develop and invest in the UK more readily, irrespective of the level of obligation to adopt them in different jurisdictions and the necessary public trust. In this respect, the UK has the opportunity to play a much more positive role with the Alan Turing Institute’s AI Standards Hub and the British Standards Institution. The OECD.AI group of experts is heavily involved in a project to find common ground between the various standards.

We need a combination of proportionate but effective regulation in the UK and the development of international standards, so, in the words of the noble Lord, Lord Holmes, why are we not legislating? His Bill is a really good start; let us build on it.


New Digital Markets Bill Must Not be Watered Down

The Digital Markets, Competition and Consumers Bill had its Second Reading in the House of Lords on 5 December 2023 and its Third Reading on 26 March 2024. This is an edited version of what I said on each occasion.

Second Reading

I thank the Minister for what I thought was a comprehensive introduction that really set the scene for the Bill. As my noble friend said, we very much welcome the Bill, broadly. It is an overdue offspring of the Furman review and, along with so many noble Lords around the House, he gave very cogent reasons, given the dominance that big tech has and the inadequate powers that our competition regulators have had to tackle them. It is absolutely clear around the House that there is great appetite for improving the Bill. I have knocked around this House for a few years, and I have never heard such a measure of agreement at Second Reading.

We seem to have repeated ourselves, but repetition is good. I am sure that in the Minister’s notebook he just has a list saying “agree, agree, agree” as we have gone through the Bill. I very much hope that he will follow the example that both he and the noble Lord, Lord Parkinson, demonstrated on the then Online Safety Bill and will engage across and around the Chamber with all those intervening today, so that we really can improve the Bill.

It is not just size that matters: we must consider behaviour, dominance, market failure and market power. We need to hold on to that. We need new, flexible pro-competition powers and the ability to act ex ante and on an interim basis—those are crucial powers for the CMA. As we have heard from all round the House, the digital landscape, whether it is app stores, cloud services or more, is dominated by the power of certain big tech companies, particularly in AI, with massive expenditure on compute power, advanced semiconductors, large datasets and the scarce technology skills forming a major barrier to entry where the development of generative AI is concerned. We can already see the future coming towards us.

In that context, I very much welcome Ofcom's decision to refer the hyperscalers in cloud services for an investigation by the CMA. The CMA and the DMU have the capability to deliver the Bill's aims; they must have the ability to implement the new legislative powers. Unlike some other commentators, we believe, as my noble friend said, that the CMA played a positively useful role in the Activision Blizzard-Microsoft merger. It is crucial that the CMA is independent of government. All around the House, there was comment about the new powers of the Secretary of State in terms of guidance. The accountability to Parliament will also be crucial, and that was again a theme that came forward. We heard about the Joint Committee proposals made by both the committee of the noble Baroness, Lady Stowell, and the Joint Committee on the Online Safety Bill.

We need to ensure that that scrutiny is there and, as the Communications and Digital Committee also said, that the DMU is well resourced and communicates its priorities, work programmes and decisions regularly to external stakeholders and Parliament.

The common theme across this debate—to mention individual noble Lords, I would have to mention almost every speaker—has been that the Bill must not be watered down. In many ways, that means going back to the original form of the Bill before it hit Report in the Commons. We certainly very much support that approach, whether it is to do with the merits approach to penalties, the explicit introduction of proportionality or the question of deleting the indispensability test in the countervailing benefits provisions. We believe that, quite apart from coming back on the amendments from Report, the Bill could be further strengthened in a number of respects.

In the light of the recent Open Markets Institute report, we should be asking whether we are going far enough in limiting the power of big tech. In particular, as regards the countervailing benefits exemption, as my noble friend said, countervailing benefits must not be used by big tech as a major loophole to avoid regulatory action, even if we went back to the definition from Report. It is clear that many noble Lords believe, especially in the light of those amendments, that the current countervailing benefits exemption provides SMS firms with too much room to evade conduct requirements.

The key thing that unites us is the fact that, even though we must act in consumers’ interests, this is not about short-term consumer welfare but longer-term consumers’ interests; a number of noble Lords from across the House have made that really important distinction.

We believe that there should be pre-notification if a platform intends to rely on this exemption. The scope of the exemption should also be significantly curtailed to prevent its abuse, in particular by providing an exhaustive list of the types of countervailing benefits that SMS firms are able to claim. We would go further in limiting the way in which the exemption operates.

On strategic market status, one of the main strengths of the Bill is its flexible approach. However, the current five-year period does not account for dynamic digital markets, for which there will be no evidence of the position in the market in five years' time. We believe that the Bill should be amended so that substantial and entrenched market power is assessed mainly on the basis of past data rather than a forward-looking assessment, and that the latter is restricted to a two-year assessment period. The consultation aspect of this was also raised; there should be much stronger rights of consultation under the Bill for businesses that do not have strategic market status.

A number of noble Lords recognised the need for speed. It is not just a question of making sure that the CMA has the necessary powers; it must be able to move quickly. We believe that the CMA should be given the legal power to secure injunctions under the High Court timetable, enabling it to stop anti-competitive activities in days. This would be in addition to the CMA’s current powers.

We have heard from across the House about the final offer mechanism affecting the news media. We believe that a straightforward levy on big tech platforms, redistributed to smaller journalism enterprises, would be a far more equitable approach. However, we need to consider, in the context of the Bill, the adoption by the CMA of the equivalent of Ofcom's duty in the Communications Act 2003

“to further the interests of citizens”,

so that it must consider the importance of an informed democracy and a plural media when considering its remedies.

The Bill needs to make it clear that platforms need to pay properly and fairly for content, on benchmarked terms and with reference to value for end-users. Indeed, we believe that they must seek permission for the content that they use. As we heard from a number of noble Lords, that is becoming particularly important as regards the large language models currently being developed.

We also believe it is crucial that smaller publishers are not frozen out or left with small change while the highly profitable large publishers scoop the pool. I hope that we will deal with the Daily Telegraph ownership question and the mergers regime in the Enterprise Act as we go forward into Committee, to make sure that the accumulation of social media platforms is assessed beyond the purely economic perspective. The Enterprise Act powers should be updated to allow the Secretary of State to issue a public interest notice seeking Ofcom’s advice on digital media mergers, as well as newspapers, and at the lower thresholds proposed by this Bill.

There were a number of questions related to leveraging. We want to make sure that we have the right approach to that. The Bill does not seem to be properly drafted to allow the CMA to prevent SMS firms from using their dominance in designated activities to increase their power in non-designated activities. We want to kick the tyres on that.

Of course, there are a great many consumer protection issues here, which a number of noble Lords raised. They include fake reviews and the need for collective action. It is important that we allow collective action not just on competition rights but further, through consumer claims, data abuse claims and so on. We should cap the costs for claimants in the Competition Appeal Tribunal. These issues also include misleading packaging.

Nearly every speaker mentioned subscriptions. I do not think that I need to point out to the Minister the sheer unanimity on this issue. We need to get this right: there is clearly support across the House for making sure that the provisions work while protecting the income of charities.

There is a whole host of other issues that we will no doubt discuss in Committee: mid-contract price rises, drip pricing, ticket touting, online scams and reforming ADR. We want to see this Bill and the new competition and consumer powers make a real difference. However, we believe that we can do this only with some key changes being made to the Bill, which are clearly common ground between us all, as we have debated the Bill today. We look forward to the Committee proceedings next year—I can say that now—which will, I hope, be very productive, if both Ministers will it so.

Third Reading

I reiterate the welcome that we on these Benches gave to the Bill at Second Reading. We believe it is vital to tackle the dominance of big tech and to enhance the powers of our competition regulators to tackle it, in particular through the new flexible pro-competition powers and the ability to act ex ante and on an interim basis.

We were of the view, and still are, that the Bill needs strengthening in a number of respects. We have been particularly concerned about the countervailing benefits exemption under Clause 29. This must not be used by big tech as a major loophole to avoid regulatory action. A number of other provisions concerning appeals standards and proportionality were inserted into the Bill on Report in the Commons. During the passage of the Bill, we added a fourth amendment to ensure that the Secretary of State's power to approve CMA guidance will not unduly delay the regime coming into effect.

As the noble Baroness, Lady Stowell, said, we are already seeing big tech take an aggressive approach to the EU Digital Markets Act. We therefore believe the Bill needs to be more robust in this respect. In this light, it is essential to retain the four key amendments passed on Report and that they are not reversed through ping-pong when the Bill returns to the Commons.

I thank both Ministers and the Bill team. They have shown great flexibility in a number of other areas, such as online trading standards powers, fake reviews, drip pricing, litigation funding, cooling-off periods, subscriptions and, above all, press ownership, as we have seen today. They have been assiduous in their correspondence throughout the passage of the Bill, and I thank them very much for that, but in the crucial area of digital markets we have seen no signs of movement. This is regrettable and gives the impression that the Government are unwilling to move because of pressure from big tech. If the Government want to dispel that impression, they should agree with these amendments, which passed with such strong cross-party support on Report.

In closing, I thank a number of outside organisations that have been so helpful during the passage of the Bill—in particular, the Coalition for App Fairness, the Public Interest News Foundation, Which?, Preiskel & Co, Foxglove, the Open Markets Institute and the News Media Association. I also thank Sarah Pughe and Mohamed-Ali Souidi in our own Whips’ Office.

Last, but certainly not least, I thank my noble friend Lord Fox for his support and—how shall I put it?—his interoperability.

Given the coalition of interest that has been steadily building across the House during the debates on the Online Safety Bill and now this Bill, I thank all noble Lords on other Benches who have made common cause and, consequently, had such a positive impact on the passage of this Bill. As with the Online Safety Act, this has been a real collaborative effort in a very complex area.


Living with the Algorithm now published!

Living with the Algorithm

Servant or Master?

AI Governance and Policy for the Future

Tim Clement-Jones

Published March 2024

Paperback with flaps, £14.99. ISBN: 9781911397922

A comprehensive breakdown of the AI risks and how to address them.

The rapid proliferation of AI brings with it a potentially massive shift in how society interacts with the digital world. New opportunities and challenges are emerging with unprecedented speed and in unprecedented fashion. AI, however, comes with its own risks, including the potential for bias and discrimination, reputational harm, and wide-scale redundancy across millions of jobs. Many prominent technologists have voiced their concern at the existential risks to humanity that AI poses. So how do we ensure that AI remains our servant and not our master?

The purpose of this book is to identify and address these key risks: looking at current approaches to the regulation and governance of AI internationally, in both the public and private sectors; considering how we meet and mitigate these challenges and avoid inadequate or ill-considered regulatory approaches; and examining how we protect ourselves from the unforeseen consequences that could flow from unregulated AI development and adoption.