Lord Clement-Jones – Speaker: AI and Creative Industries | UK, China, Middle East
https://www.lordclementjones.org

Lord C-J: We need more women in the STEM Workforce
https://www.lordclementjones.org/2025/03/08/lord-c-j-we-need-more-women-in-the-stem-workforce/
Lord C-J: We must put the highest duties on small risky sites
https://www.lordclementjones.org/2025/03/01/lord-c-j-we-must-put-the-highest-duties-on-small-risky-sites/

This is what I said in opening the debate. The motion passed against the government by 86 to 55.

What account did the Government and Ofcom take of the interaction and interrelations between small and large platforms, including the use of social priming through online “superhighways”, as evidenced in the Antisemitism Policy Trust’s latest report, which showed that cross-platform links are being weaponised to lead users from mainstream platforms to racist, violent and anti-Semitic content within just one or two clicks?

The solution lies in more than mere technical adjustments to categorisation thresholds; it demands a fundamental rethinking of how we assess and regulate online risk. A truly effective regulatory framework must consider both the size and the risk profile of platforms, ensuring that those capable of causing significant harm face appropriate scrutiny, regardless of their user numbers, and are prevented from causing it. Anything less—as many of us across the House believe, including on these Benches—would bring into question whether the Government’s commitment to online safety is genuine. The Government should act decisively to close these regulatory gaps before more harm occurs in our increasingly complex online landscape. I beg to move.

AI and Copyright: Lord C-J "The Government need to take this option off the table"
https://www.lordclementjones.org/2025/03/01/ai-and-copyright-lord-c-j-the-government-need-to-take-this-option-off-the-table/

With huge thanks to Christian Gordon-Pullar for all his work, here is our response to the Government’s consultation on IP and copyright. We are clear that there is no lack of clarity in UK copyright law that would allow technology companies to scrape the internet and use copyright material for training their AI models without any recompense to creators, and that we need to introduce clear rules requiring transparency of use and a better enforcement mechanism for breaches of copyright.

I and my Liberal Democrat colleagues fully support the major campaign by the media, artists and the creative industries to demand that the government take its preferred option, a text and data mining exception requiring an opt-out, off the table and ensure that one of the most valuable sectors in the British economy survives and thrives alongside AI.

Here is a link to the Consultation 

https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence

And here is our response

Response to Consultation:  AI and Copyright on behalf of Lord Clement-Jones and Christian Gordon-Pullar 

  1. Context for Response to Consultation

Use of AI clearly offers significant opportunities across the broad canvas of the United Kingdom’s creative industries, and abroad.  Creators and associated creative businesses are using AI technology to support creativity, the process of content production or to help personalise content.  AI clearly has many creative uses, as Sir Paul McCartney has emphasised. It is one thing, however, to use the technology but another to be at the mercy of it.

The Government consultation[3] itself begins with the sentence:

“Two major strengths of the UK economy are its creative industries and AI sector. Both are essential to drive economic growth and deliver the government’s Plan for Change.”

We support the policy objectives within the consultation, in particular the three high-level objectives it sets out in relation to AI and copyright, namely:

  1. Supporting right holders’ control of their content and ability to be remunerated for its use.
  2. Supporting the development of world-leading AI models in the UK by ensuring wide and lawful access to high-quality data.
  3. Promoting greater trust and transparency between the sectors.

It is incumbent on any Government to find a true and fair balance for authors, musicians, artists and all creative content creators and owners, not to favour foreign and domestic tech and AI companies and tech entrepreneurs at the expense of the giants on whose creative and historical works their success relies and on whose shoulders their businesses and technology stand.

The Ministerial foreword reinforces this:

“This consultation sets out our plan to deliver a copyright and AI framework that rewards human creativity, incentivises innovation and provides the legal certainty required for long-term growth in both sectors.”

It is unclear and remains unexplained – in the Consultation – why the Government states:

“AI firms have raised concerns that the lack of clarity over how they can legally access training data creates legal risks, stunts AI innovation in the UK and holds back AI adoption”

It is entirely unclear what lack of clarity is being referenced. There is currently clarity and certainty in the copyright regime in the United Kingdom, and additionally the UK recognises Computer Generated Works (see para 51 of the Consultation). In relation to copyright and intellectual property (IP), under the current law on content ingestion by AI developers, consent must be secured for the use of rightsholders’ content. The Consultation appears to be creating the distinct impression that copyright owners should be concerned, and it is this that is creating uncertainty.

The Consultation also states:

The creative industries drive our economy, including TV and film, advertising, the performing arts, music, publishing, and video games. They contribute £124.8 billion GVA to our economy annually, they employ many thousands of people, they help define our national identity and they fly the flag for our values across the globe. They are intrinsic to our success as a nation and the intellectual property they create is essential to our economic strength.

It is unclear however if, and to what extent, the Government has carried out any serious investigation into the financial impact on the creative industries in the preparation of this Consultation, or since its publication. It is however clear that the impact will be significant and very likely greater than the proposed benefits of the data centres and investments offered by Big Tech.

The estimate of benefits to the UK economy used by the AI Opportunities Plan is built on shaky foundations. It is derived from Google’s UK Economic Impact Report, which highlighted that “AI-powered innovation could create over £400 billion in economic value for the UK economy by 2030”. The £400 billion figure cited by Google comes from a report commissioned by Google and compiled by the consultancy firm Public First. This economic impact report was designed to analyse the potential effects of AI adoption on the UK economy by 2030. Public First conducted the research using several methods:

  • Polling of over 4,000 individuals across every region in the UK
  • Polling of 1,000 senior business leaders from small, medium, and large businesses across various industries
  • Traditional economic modelling to measure the economic activity driven by Google products.

The report estimates that AI-powered innovation could create over £400 billion in economic value for the UK economy by 2030, which is equivalent to an annual growth rate of 2.6%. 

This figure is based on projections of how AI technologies could boost productivity, create new job opportunities, and drive innovation across various sectors of the economy. It is important to note that this is a projection based on economic modelling and assumptions about future AI adoption and impact. As with any such forecast, it should be viewed as an estimate rather than a guaranteed outcome.

We remain convinced that the current copyright regime is clear and no evidence has been produced to warrant a new and more permissive exception regime to existing copyright laws in the United Kingdom.  It is our preferred option that the Government makes a clear statement that the use and/or ‘ingestion’ of content, without consent, to train an AI model capable of being used beyond non-commercial research, constitutes copyright infringement.

  2. Foreword/Summary

Questions surrounding the balance between copyright and data mining (text and data mining, or TDM) are a major issue for content owners and creatives in the literary, musical and visual arts, not just in the UK but around the world.

Getty and the New York Times are suing in the United States, as are many writers, artists and musicians, and the issue was at the root of the Hollywood actors’ and writers’ strikes last year.

Here in the United Kingdom, as the Government’s intentions have become clearer, the temperature has risen. We have seen the creation of a new campaign, the Creative Rights in AI Coalition (CRAIC), across the creative and news industries, and a statement organised by Ed Newton-Rex[4] attracting over 30,000 signatories from creators and creative organisations. But with the current Consultation, we are now faced with a proposal for a text and data mining exception which we thought was settled under the last Government. It starts from the false premise of legal uncertainty.

As the News Media Association says:

The government’s consultation is based on the mistaken idea—promoted by tech lobbyists and echoed in the consultation—that there is a lack of clarity in existing copyright law. This is completely untrue: the use of copyrighted content by Gen AI firms without a license is theft on a mass scale, and there is no objective case for a new text and data mining exception.

There is no lack of clarity over how AI developers can legally access training data. The applicable law in England and Wales is absolutely clear that commercial organisations – including Gen AI developers – must license the data they use to train their Large Language Models (“LLMs”). Merely because AI platforms such as Stability AI are resisting claims does not mean the law in the UK is uncertain. There is no clear reason for developers to find ‘it difficult to navigate copyright law in the UK’.

AI developers have already, in a number of cases, reached agreement with news publishers. OpenAI has signed deals with publishers like News Corp, Axel Springer, The Atlantic, and Reuters, offering annual payments between $1 million and $5 million, with News Corp’s deal reportedly worth $250 million over five years.

More recently, it is clear that the US fair use defence questions have not been settled despite the ruling in Thomson Reuters v. ROSS Intelligence, which involved Thomson Reuters suing ROSS Intelligence for using its copyrighted Westlaw headnotes to train an AI-powered legal research tool. On February 11, 2025, Judge Stephanos Bibas of the Delaware federal district court ruled against ROSS, rejecting its fair use defence and granting partial summary judgment in favour of Thomson Reuters. It is notable, however, that the court emphasised that ROSS’s use was commercial and non-transformative, as it created a competing product using the copyrighted material. This decision is significant as it sets a precedent for AI copyright cases, though it does not address generative AI specifically.

There can be no excuse of market failure. There are well established licensing solutions administered by a variety of well-established mechanisms and collecting societies. There should be no uncertainty around the existing law and the surrounding legal framework. We have some of the most effective collective rights organisations in the world.

The Consultation says that “The government believes that the best way to achieve these objectives is through a package of interventions that can balance the needs of the two sectors”. The government appears to believe we need to achieve a balance between the creative industries and the tech industries. But the Consultation raises the fundamental question as to what kind of balance the government’s preferred option will deliver.

The government’s preferred option is to change the UK’s copyright framework by creating a text and data mining exception where rights holders have not expressly reserved their rights—in other words, an ‘opt-out’ system, where content is free to use unless a rights holder proactively withholds consent. To complement this, the government is proposing: (a) transparency provisions; and (b) provisions to ensure that rights reservation mechanisms are effective.

The government has stated that it will only move ahead with its preferred ‘rights reservation’ option if the transparency and rights reservation provisions are ‘effective, accessible, and widely adopted’. However, it will be up to Ministers to decide what provisions meet this standard, and it is clear that the government wishes to move ahead with this option regardless of workability, without knowing if their own standards for implementation can be met.

A few key overarching points to note:

  1. Although it is absolutely clear that the use of copyright works to train AI models is contrary to UK copyright law, the laws around transparency of these activities have not caught up. As well as using pirated e-books in their training data, AI developers scrape the internet for valuable professional journalism (even where such articles are protected by © copyright notices and terms and conditions) and other media, in breach of both the terms of service of websites and copyright law, for use in training commercial AI models.
  2. At present, developers can do this without declaring their identity, or they may use IP scraped for inclusion in a search index for the completely different commercial purpose of training AI models.
  3. How can rights owners agree – in principle or in practice – to opt out of something they do not fully understand or even know about? AI developers will often scrape websites, or access other pirated material, before they launch an LLM in public. This means there is no way for IP owners to opt out of their material being taken before its inclusion in these models. Once used to train these models, the commercial value has already been extracted from the third-party IP scraped without permission, with no practical way to find or delete data from those models.
  4. The next wave of AI models responds to user queries by browsing the web to extract valuable news and information from professional news websites. This is known as Retrieval Augmented Generation (RAG); a schematic sketch follows this list. Without payment for extracting this commercial value, AI agents built by companies such as Perplexity, Google and Meta will effectively free-ride on the professional hard work of journalists, authors and creators. At present such crawlers are hard to block.
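
To make point 4 concrete, here is a schematic, runnable sketch of the RAG pattern. The fetching and model functions are toy stand-ins of our own (no real vendor API is implied); the structural point is that publisher content is pulled and monetised at query time, long after training has finished.

```python
# Schematic RAG loop: fetch live web content, then condition the model on it.
# `fetch_pages` and `call_model` are illustrative stubs, not a real API.

def fetch_pages(query: str) -> list[str]:
    # Stand-in for a live crawl of news sites matching the query.
    return [f"(text scraped from a news article about {query!r})"]

def call_model(prompt: str) -> str:
    # Stand-in for a generative model completing the prompt.
    return f"Answer synthesised from: {prompt[:60]}..."

def answer_with_rag(query: str) -> str:
    context = "\n\n".join(fetch_pages(query))
    prompt = f"Using only the sources below, answer: {query}\n\n{context}"
    # Commercial value is extracted from the publisher's page at this step,
    # with no payment to the rights holder.
    return call_model(prompt)

print(answer_with_rag("today's headlines"))
```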

This is incredibly concerning, given that no effective ‘rights reservation’ system for the use of content by Gen AI models has been proposed or implemented anywhere in the world, making the government proposals entirely speculative.

As the NMA also says:

What the government is proposing is an incredibly unfair trade-off—giving the creative industries a vague commitment to transparency, whilst giving the rights of hundreds of thousands of creators to Gen AI firms. While creators are desperate for a solution after years of copyright theft by Gen AI firms, making a crime legal cannot be the solution to mass theft.[5]

We need transparency and a clear statement about copyright. We absolutely should not expect artists to have to opt out. AI developers must: be transparent about the identity of their crawlers; be transparent about the purposes of their crawlers; and have separate crawlers for distinct purposes. Unless news publishers and the broader creative industries can retain control over their data – making UK copyright law enforceable – AI firms will be free to scrape the web without remunerating creators. This will not only reduce investment in trusted journalism but will ultimately harm innovation in the AI sector. If less and less human-authored IP is produced, tech developers will lack the high-quality data that is the essential fuel of generative AI.

Amending the applicable Law to address the challenges posed by AI development, particularly in relation to copyright and transparency, is essential to protect the rights of creators, foster responsible innovation, and ensure a sustainable future for the creative industries.

This should apply regardless of the country in which the scraping of copyright material takes place: if developers market their product in the UK, UK law should apply, wherever the training takes place.

It will also ensure that AI start-ups based in the UK are not put at a competitive disadvantage due to the ability of international firms to conduct training in a different jurisdiction. It is clear that AI developers have used their lobbying clout to persuade the government that a new exemption from copyright – in their favour – is required.

In response, we will vigorously oppose the preferred option for a new text and data mining exemption with an opt-out, and will seek to ensure that the government answers the following key questions before proceeding further:

  1. What led the government to do a U-turn on the previous government’s decision to drop the text and data mining exemption it proposed?
  2. What estimate has it made of the damage to the creative industries from implementing its clearly favoured option of a TDM exception plus opt-out, given that no robust economic assessment currently exists?
  3. Is damaging the most successful UK economic sector for the benefit of US AI developers what it means by balance?
  4. Why has it not included the possibility of an opt-in TDM exception in its consultation paper options?
  5. What examples of successful, workable opt-outs or rights reservations from TDM can it draw on, particularly for small rights holders? What research has it done? The paper essentially admits that effective technology is not there yet. Is it not clear that the EU opt-out system under the Copyright Directive has not delivered clarity?
  6. What regulatory mechanism, if any, does the government envisage if its proposal for a TDM exception with rights reservation/opt-out is adopted? How are creators to be sure any new system would work in the first place?

Detailed Response below

  3. Response to Consultation: Copyright – Text and Data Mining

The three stated objectives in the Consultation[6] are set out in para / section 54 of the Consultation:

  1. Supporting right holders’ control of their content and ability to be remunerated for its use.
  2. Supporting the development of world-leading AI models in the UK by ensuring wide and lawful access to high-quality data.
  3. Promoting greater trust and transparency between the sectors.

The Government rightly believe that there is a need to promote and further enable AI development. This must however be balanced with a commensurate and proportionate recognition of the critical importance and value of data as raw material.  AI developers rely on high-quality data to develop reliable and innovative AI-driven inventions and applications. Licensing regimes under existing IP law are designed to cater for the needs of AI developers.

By the same token content and data-driven businesses themselves have seen a rapid increase in the use of AI technology and machine-learning, either for news summaries, data gathering efforts, translations for research and journalistic purposes or to assist organisations to save time by processing large amounts of text and other data at scale and speed.   Digital technologies, including AI, are and will continue to be of critical importance to these industries, helping create content, new products and value-added services to deliver to a broad range of corporate and retail clients. Whether in news media or cross-industry research, publishers are themselves investing in AI; continued collaboration with start-ups and academia are creating tailored materials for wide populations of beneficiaries (students, academia, research organisations, and even marketers of consumer publishing products).

It is of paramount importance to balance the needs of future AI development with the legal, commercial and economic rights of copyright and data owners, and the need to incentivise new AI adoption with recognition of the rights of – and remuneration for – existing content owners.

We have, however, seen no evidence that the existing copyright legislative framework fails to address the current needs of AI developers adequately. Moreover, it is particularly important, in our view, to ensure that the development of AI is not enabled at the expense of the underlying investment by copyright and data owners (see endnote 1).

If the content owners of underlying data materials withhold the licensing of, or access to, such materials or attempt to price them at a level that is unfair, the answer is for Government via the Competition and Markets Authority/the new Digital Markets Unit (or indeed other regulators who form part of the Digital Regulation Cooperation Forum) to put in place competition measures to ensure there is a clear legal recourse in such situations.

In summary we do not believe that current copyright law creates a disparity between the interests of AI developers and investors and content owners. The existing copyright regime under the CDPA reflects a balance that fairly protects those investing in data creation without giving an unfair advantage to technology companies offering AI-enabled content creation services. In particular the current framework provides a balanced regime for data and text mining and we believe no changes are required at present.

At the very least, if AI operators and providers are unable to demonstrate transparency and provide users and regulators with access to clear records of the inputs that the AI technology has used (e.g. whether sources of content include copyrighted content), it will be impossible to satisfy the UK regime, or basic international cybersecurity standards, let alone copyright or applicable parallel import laws, or UK sovereignty principles.

Our RESPONSES follow, in order.

Section C1

Question 1. Do you agree that option 3 is most likely to meet the objectives set out above?

NO, we do not agree. 

  • Creating a more permissive system of copyright is unlikely to incentivise AI developers to obtain consent or license content from rightsholders.
  • AI developers have shown little appetite to license content at scale and there have been no signals, from what we have seen, that that position would change under any new regime. In the EU, which introduced a new Text and Data Mining (TDM) Exception with an Opt-Out (before the explosion in AI development) there has been no material increase in licensing of content, demonstrating that it is not the law which is preventing such licensing.
  • As currently drafted, the new exception proposed in the Consultation would also be available to all users, not simply AI developers training models. This would mean any user could copy works and reproduce them for commercial gain unless those rights were reserved. This presents a distinct opportunity for unscrupulous users to deliberately seek out works that are not rights-reserved and exploit them commercially, which is not possible under the existing copyright system.
Question 2. Which option do you prefer and why?

Ranking Options in order:

  • We would therefore urge the Government to choose Option 0 – make no legal change. No other option is currently justifiable given the lack of evidence of an adverse commercial environment preventing access to data or text by AI-enabled content creators. Should the Government or IPO consider that there needs to be increased access to data at lower cost, it should look at other policy levers to stimulate such uptake, such as providing tax incentives for content owners to license content, rather than reducing copyright protection.
  • We also concur with industry leads who consider that forcing rightsholders to opt in to protection, or opt out of a data mining exception – as suggested in Option 3 – would be complicated and costly for many businesses and industries who own literally millions of works, when licensing is far simpler, and would be against the spirit of international treaties on copyright.
  • Further, such changes would impact the rights of copyright owners as enshrined in Article 1 of the First Protocol to the European Convention on Human Rights (ECHR). The Human Rights Act 1998 incorporates the rights contained in the ECHR into UK national law. This means that they can be used to challenge the actions and decisions of governments and public bodies in the UK courts. Under the Human Rights Act 1998, intellectual property rights are protected as part of the broader “right to property” enshrined in Article 1 of the First Protocol, meaning that public authorities cannot interfere with intellectual property without a legitimate legal reason and in the public interest; this includes patents, trade marks, copyright, and other forms of intellectual property.

Article 1 of the First Protocol states:

“Every natural or legal person is entitled to the peaceful enjoyment of his possessions. No one shall be deprived of his possessions except in the public interest and subject to the conditions provided for by law and by the general principles of international law.

 

The preceding provisions shall not, however, in any way impair the right of a State to enforce such laws as it deems necessary to control the use of property in accordance with the general interest or to secure the payment of taxes or other contributions or penalties.”

Possessions include any tangible and intangible property.

While the Act protects intellectual property, it does allow for limitations in the public interest, meaning that the government can restrict intellectual property rights under certain circumstances if it is deemed necessary for the greater good. The proposed exception is clearly for the benefit of tech and AI companies, not the greater good of content owners and creative industries across the fields of literary, musical and visual arts, inter alia.

Section C1 (cont.)

Question 3. Do you support the introduction of an exception along the lines outlined above?

RESPONSE: No, this is not necessary under UK law as the copyright owner already holds such rights, and such an exception would not be effective.

Absent a licence, or consent in writing, such rights to control his/her/its copyright are reserved for the copyright owner and no use of that copyright is permitted (except under existing non-commercial research exceptions for academic research, inter alia).  Any such unauthorised use would constitute copyright infringement.

Question 4. If so, what aspects do you consider to be the most important? If not, what other approach do you propose and how would that achieve the intended balance of objectives?

RESPONSE: Only applicable if Option 3 is the eventual outcome. If such an approach were in fact the outcome of the consultation, a presumption (as per existing UK law) should exist that no content is automatically permitted for TDM use by AI/tech companies or other third parties, even where content that is publicly available or otherwise accessible carries no text- or machine-readable opt-out language. The presumption must be in favour of the content and copyright owner; otherwise the regime risks creating costly litigation for SMEs and individuals, who cannot reasonably be expected to allocate funds to litigate against foreign and domestic tech companies and other well-funded tech start-ups seeking to use content without consent.

Any new exception would also have to be narrowly drafted to ensure it is limited to AI training, to ensure ill-intentioned users do not exploit the new system to reproduce works for commercial gain outside of the AI environment.

Question 5.  What influence, positive or negative, would the introduction of an exception along these lines have on you or your organisation? Please provide quantitative information where possible.

RESPONSE: Any new exceptions would adversely impact the creative industries both operationally and financially, as seen from feedback, publications and statements made by the Performing Right Society (PRS)[7][8], Anti-Copying in Design (ACID)[9] and others. (See footnotes for references.)

Content owners would have to spend time and money on legal advice, potentially, to:

  • Embed Metadata and Watermarks – Add metadata to digital files to indicate copyright ownership and usage restrictions. Watermarks could deter unauthorised use if a robust and easily usable form were readily available. Embedding metadata could be relatively simple and could be done using file properties, specialised software or programming methods (e.g. EXIF for images, or custom fields in JSON or XML); a minimal sketch follows this list. See Appendix 1.
  • Monitor and Enforce Their Rights
    Content owners would have to regularly check for unauthorised use of their copyright works online. If an owner identifies infringements, they would need to contact the offending party to request removal, or seek legal advice. However, identifying the offending party remains a significant challenge without a proper system of transparency requirements in place.
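
A minimal sketch of the metadata step described in the first bullet, assuming the Pillow imaging library: it writes the standard EXIF Copyright and Artist tags into a JPEG and adds a JSON "sidecar" record of the rights reservation. The filenames, owner and notice text are illustrative assumptions, not a prescribed format.

```python
# Embed a copyright notice in a JPEG's EXIF block (Pillow), plus a JSON
# sidecar so the reservation is also machine-readable outside the image.
import json
from PIL import Image  # pip install Pillow

COPYRIGHT_TAG = 0x8298  # standard EXIF "Copyright" tag
ARTIST_TAG = 0x013B     # standard EXIF "Artist" tag

def embed_copyright(src: str, dst: str, owner: str, notice: str) -> None:
    img = Image.open(src)
    exif = img.getexif()
    exif[COPYRIGHT_TAG] = notice
    exif[ARTIST_TAG] = owner
    img.save(dst, exif=exif)
    # Sidecar record: a machine-readable statement of the rights reservation.
    with open(dst + ".rights.json", "w", encoding="utf-8") as f:
        json.dump({"owner": owner, "notice": notice, "tdm_reservation": True},
                  f, indent=2)

embed_copyright("photo.jpg", "photo_marked.jpg",
                owner="Jane Doe",
                notice="© 2025 Jane Doe. All rights reserved. No TDM use.")
```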

For example, a photographer would have to retrospectively opt out thousands of individual works to gain protection which is currently automatic, time they can ill afford and which is better spent generating new revenue-generating copyright-protected works. Legal costs to challenge infringement would likely increase, and under a new regime there would have to be a dual track for action, one under the new regime and another under the existing regime, potentially doubling legal costs.

Question 6. What action should a developer take when a reservation has been applied to a copy of a work?

RESPONSE: The developer must seek consent and pay for the content before training AI or technology systems on it, and without such consent should not train its AI or technology on such content. This applies equally today under the existing law – and most companies ignore such rights because they are not enforced and the consequences of enforcement are too financially burdensome for content owners – hence the rights should be bolstered, not diluted.

Question 7. What should be the legal consequences if a reservation is ignored?

RESPONSE: Any new system for rights reservation must have at least the same legal standing as Technical Protection Measures. That is sub-optimal in any event. We propose that statutory strict liability should be imposed and a presumption of copyright infringement should apply in cases where use is without consent/licence.

Question 8. Do you agree that rights should be reserved in machine-readable formats? Where possible, please indicate what you anticipate the cost of introducing and/or complying with a rights reservation in machine-readable format would be.

RESPONSE: No: any such system should be sufficiently flexible to enable different content owners to opt out for different types of works. While machine-readable formats would most likely be required, these must be simple and low-cost enough for all rightsholders to access; without this, such measures place the burden on content owners to spend money defending copyright and IP protection, rights that are fundamentally embodied in existing law and already held under the Human Rights Act 1998.
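
By way of illustration, the closest thing in use today to a machine-readable reservation is robots.txt. The sketch below, using only the Python standard library, shows a policy disallowing some publicly documented AI training crawlers (GPTBot, CCBot, Google-Extended; the list is illustrative, not exhaustive) and the check a compliant crawler would run. It also illustrates the weakness noted above: compliance is entirely voluntary.

```python
# A robots.txt-style reservation and the stdlib check a compliant crawler runs.
from urllib import robotparser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

for bot in ("GPTBot", "CCBot", "Google-Extended", "OrdinarySearchBot"):
    allowed = rp.can_fetch(bot, "https://example.com/article")
    print(f"{bot}: may fetch = {allowed}")
# Nothing here *prevents* fetching; a non-compliant crawler simply ignores it.
```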

Section C2: Technical Standards

Question 9. Is there a need for greater standardisation of rights reservation protocols?

RESPONSE: If required at all, standardisation of rights reservation protocols would seem helpful.

Question 10. How can compliance with standards be encouraged?

RESPONSE: Infringement or breach of any such protocols would need to be clearly stated to constitute copyright infringement, with deterrents in place to create a compliant legislative regime. In the absence of such protocols, statutory strict liability should be imposed or a presumption of copyright infringement should apply.

Question 11. Should the government have a role in ensuring this and, if so, what should that be?

RESPONSE: Establish a Government regulator or unit to enforce such rights, paid for by the tech industry, which is demanding additional rights that derogate from rights copyright and IP owners already hold under existing UK copyright legislation and the Human Rights Act 1998.

Section C3 – Licensing and contracts

Question 12. Does current practice relating to the licensing of copyright works for AI training meet the needs of creators and performers?

RESPONSE: The current licensing regime does not expressly address licensing for AI training, but AI training entities should apply the existing legal principles under the existing law: actually check copyright notices and apply for a licence or consent where no other approach is available.

Question 13. Where possible, please indicate the revenue/cost that you or your organisation receives/pays per year for this licensing under current practice.

RESPONSE:  n/a from the authors

Question 14. Should measures be introduced to support good licensing practice?

RESPONSE: There is no presumption that commercial AI training or use of inputs is permitted under UK copyright law, and rights-management societies and professional bodies, including PRS and other licensing organisations, already provide for such good licensing practices; they may therefore need to update those practices for use by AI.

See https://www.prsformusic.com/ and https://www.gov.uk/licence-to-play-live-or-recorded-music, the Independent Cinema Office for film licensing at https://www.independentcinemaoffice.org.uk/advice-support/what-licences-do-i-need/film-copyright-licensing/ and ICMP for contemporary music at https://www.icmp.ac.uk/blog/understanding-music-copyrights-and-licenses

Question 15. Should the government have a role in encouraging collective licensing and/or data aggregation services? If so, what role should it play?

RESPONSE: No – this should be left to professional collection societies and licensing bodies authorised by each industry. But the Government could, as an alternative to the preferred approach of robust enforcement, assist content owners by making any unauthorised use enforceable as a statutory liability, or by creating a presumption of infringement if that is not already clear (it seems clear to the authors).

Question 16. Are you aware of any individuals or bodies with specific licensing needs that should be taken into account?

RESPONSE:  n/a

Section C4 – Transparency

Question 17. Do you agree that AI developers should disclose the sources of their training material?

RESPONSE: YES. Transparency is vital to the AI ecosystem. We advocate for transparency, by which we mean that AI developers must maintain records, at a granular level, of the individual works that their AI systems have ingested.

Question 18. If so, what level of granularity is sufficient and necessary for AI firms when providing transparency over the inputs to generative models?

RESPONSE: As with the current law – the source, author and detail of the data/content used, and whether it is used under licence or not. Granularity is crucial: a general statement would not be sufficient to protect the principles of transparency, nor creators’ rights under the law.

Question 19. What transparency should be required in relation to web crawlers?

RESPONSE: We should retain the amendments to the Data (Use and Access) Bill in this respect, proposed by Baroness Kidron and passed by the House of Lords on 28 January 2025, which provide inter alia for regulations requiring AI model operators to disclose the following (a hypothetical disclosure record follows the list):

  • the name of the crawler,
  • the legal entity responsible for the crawler,
  • the specific purposes for which each crawler is used,
  • the legal entities to which operators provide data scraped by the crawlers they operate, and
  • a single point of contact to enable copyright owners to communicate with them and to lodge complaints about the use of their copyrighted works.
  • the URLs accessed by crawlers deployed by them or by third parties on their behalf or from whom they have obtained text or data,
  • the text and data used for the pre-training, training and fine-tuning, including the type and provenance of the text and data and the means by which it was obtained,
  • information that can be used to identify individual works, and
  • the timeframe of data collection.
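
For illustration only, a disclosure covering those fields might look like the record below. The schema and field names are our own hypothetical example, not a format prescribed by the Bill or the amendments; all names and URLs are invented placeholders.

```python
# A hypothetical crawler-disclosure record covering the Kidron-amendment
# fields listed above.
import json

disclosure = {
    "crawler_name": "ExampleBot/1.0",
    "responsible_entity": "Example AI Ltd",
    "purposes": ["pre-training", "fine-tuning"],
    "data_recipients": ["Example AI Ltd", "Example Research Institute"],
    "complaints_contact": "copyright@example.com",
    "urls_accessed": "https://example.com/crawl-manifests/2025-01.csv",
    "training_data": {
        "type": "text",
        "provenance": "crawled from publicly accessible news sites",
        "means_obtained": "web crawling",
    },
    "work_identification": "per-URL hashes published in the manifest",
    "collection_window": {"from": "2025-01-01", "to": "2025-01-31"},
}

print(json.dumps(disclosure, indent=2))
```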

Question 20. What is a proportionate approach to ensuring appropriate transparency?

RESPONSE: Unclear, but it must at least involve an effort by AI and tech developers scraping content equal to or greater than that being considered for content owners, who would have to add technical measures to their content (e.g. watermarks and machine-readable opt-out notices) and/or bear further technical, legal and operational costs to craft disclaimers or text asserting their already existing rights.

Question 21. Where possible, please indicate what you anticipate the costs of introducing transparency measures on AI developers would be.

RESPONSE: Unclear at this stage, but perhaps the Government can broker, as part of its incentive deals, a framework to resolve past copyright infringement issues: a one-off settlement/payment for past copyright infringement that would obviate the need for class actions by creative content owners or individuals.

Question 22. How can compliance with transparency requirements be encouraged, and does this require regulatory underpinning?

RESPONSE: If Option 3 is adopted, then it must be a condition that tech developers and AI companies, at least, take all reasonable operational measures to ensure that copyright content is licensed or that its input and output use is otherwise authorised (under licence or written consent), such efforts to be at least equal to or greater than those likely to be expected of content owners (who would have to add technical measures to their content, e.g. watermarks and machine-readable opt-out notices, and/or bear further technical, legal and operational costs to assert their already existing rights).

Question 23. What are your views on the EU’s approach to transparency?

RESPONSE: It is very questionable, to say the least, how effective or workable the Working Groups implementing the EU AI Act have found the opt-out provisions. In the meantime, the transparency provisions are a clear benchmark for the UK, and it should take note, given that until recently the UK was bound by such rules. The law in the UK should at least equally protect UK citizens and content and creative owners, to promote consistency and to avoid a mass migration of creatives, but should not impose unworkable opt-out mechanisms based on an as-yet-untested EU comparison.

Section C5: Clarification of Copyright Law

Question  24. What steps can the government take to encourage AI developers to train their models in the UK and in accordance with UK law to ensure that the rights of right holders are respected?

RESPONSE:   See above responses to Q20 and Q22 – and reiterated here.  A statutory strict liability should be imposed or a presumption of copyright infringement should apply, failing which, the Government should make a clear statement, in the form of a Copyright Notice, that the current exception regime does not allow for the use of works, covered by copyright, for commercial purposes, without the consent of the owner of those works.

Section C6

Question  25. To what extent does the copyright status of AI models trained outside the UK require clarification to ensure fairness for AI developers and right holders?

RESPONSE: If an AI company has trained its AI on content that is covered by copyright in the United Kingdom, then making the output or service provided by that company available in the United Kingdom would still constitute copyright infringement.

At the very least, if AI operators and providers are unable to demonstrate transparency and provide users and regulators with access to clear records of the inputs that the AI technology has used (e.g. whether sources of content include copyrighted content), it will be impossible to satisfy the UK regime, or basic international cybersecurity standards, let alone copyright or applicable parallel import laws, or UK sovereignty principles.

Question 26. Does the temporary copies exception require clarification in relation to AI training?

RESPONSE: No; this is no defence. It is also no different to the existing approach taken by any computer (an AI is just a software programme and no different to existing technologies, for now).

Question 27. If so, how could this be done in a way that does not undermine the intended purpose of this exception?

RESPONSE:  We are not in favour of any exception but if such an exception were to be considered, then clear guardrails would need to be implemented – to ensure that any such temporary copies create no economic value or advantage.

Section C6 – Encouraging Research and Innovation

Question  28. Does the existing data mining exception for non-commercial research remain fit for purpose?

RESPONSE: YES, it is sufficient and fit for purpose as it currently stands[10]. The exception received significant Parliamentary scrutiny before being implemented in 2014 and we believe any reform would significantly change the careful balance agreed upon then. Any such reform of the exception would require significant and separate analysis, as opposed to being mixed in with this consultation.

Question 29. Should copyright rules relating to AI consider factors such as the purpose of an AI model, or the size of an AI firm?

RESPONSE: No. All such instances and uses of copyright content are still governed by the existing UK copyright legislation, and the size or purpose of the firm is irrelevant (unless perhaps it is a true charity, not a charitable front designed by and for a commercial purpose).

Section D – Computer-generated works: protection for the outputs of generative AI

Option 0: No legal change, maintain the current provisions

RESPONSE:  Maintain the status quo.

  • Computer Generated Works (CGWs) distinguish the UK from other countries and prevent the argument that AI needs to ‘own’ IP outside of the existence of a ‘human author’ for creativity – it does not. AI is a tool in the hands of a company or individual.
  • CGWs protection is necessary to encourage the production of outputs by generative AI or other tools, and any legal ambiguity is likely to be resolved or of little effect. The Courts will resolve any ambiguity as they have done in England and Wales for centuries.
  • The exception in s9(3) CDPA works: if a work is computer-generated – that is, not authored by a human – then copyright is vested in the person who made the “arrangements necessary for the creation of the work”.
  • AI does not require or deserve any special rights or considerations, and such rights are adequately covered by s9(3) of the CDPA.

Section D2 – Outputs

Question 30. Are you in favour of maintaining current protection for computer-generated works? If yes, please explain whether and how you currently rely on this provision.

RESPONSE: YES. See above regarding Computer Generated Works: expressly, these distinguish the UK from other countries where such a regime does not exist.

Question 31. Do you have views on how the provision should be interpreted?

RESPONSE: It has been clearly interpreted in case law. The Advocate General in Painer[11] took this view, noting that only human creations can be copyright-protected (although the human can employ a “technical aid” like a camera). A similar position has also been taken by the U.S. Copyright Office, which determined that images created using the generative AI model Midjourney were not original works of authorship protected by U.S. copyright law, because this excludes works produced by non-humans[12]. Case law from other countries also reflects this understanding[13]. It is right and proper that the facts of each case should determine the outcome, as was Parliament’s intention[14].

In short, no changes to CGWs are required.

Question 32. Would computer-generated works legislation benefit from greater legal clarity, for example to clarify the originality requirement? If so, how should it be clarified?

RESPONSE: No.

Question 33. Should other changes be made to the scope of computer-generated protection?

RESPONSE:  No

Question 34. Would reforming the computer-generated works provision have an impact on you or your organisation? If so, how? Please provide quantitative information where possible.

RESPONSE: Unknown until details are provided of what the changes would be in a legislative context; the authors consider reform unnecessary.

Question 35. Are you in favour of removing copyright protection for computer-generated works without a human author?

RESPONSE: NO, for the reasons given above. The UK is fortunate to have a CGW right, which is absent from many legislative frameworks.

Question 36. What would be the economic impact of doing this? Please provide quantitative information where possible.

RESPONSE: Unknown as yet.

Question 37. Would the removal of the current CGW provision affect you or your organisation? Please provide quantitative information where possible.

RESPONSE: Almost certainly, given the licensing arrangements and revenue based on existing legislation. Quantum unknown.

Section D4

Question 38.  Does the current approach to liability in AI-generated outputs allow effective enforcement of copyright?

RESPONSE:  The law is clear in relation to AI-generated outputs.  If a service is being provided in the UK which has been trained on the use of UK material, without permission, then the service is infringing and operating illegally. The enforcement of the law is clearly challenging given the lack of transparency by AI developers of the works they have used to train their models and for what purpose. See above proposals on strict liability regime for AI companies infringing copyright and alternative enforcement mechanisms mentioned in previous responses, above.

Question 39.  What steps should AI providers take to avoid copyright infringing outputs?

RESPONSE: Comply with the law:

  • check copyright notices (which is easy with AI tools) and
  • obtain consent under licence or written permission to use substantial elements of content in which copyright subsists and is claimed and/or owned by a third party under a simple © Notice.

Section D5 – AI Output Labelling

Question 40. Do you agree that generative AI outputs should be labelled as AI generated? If so, what is a proportionate approach, and is regulation required?

RESPONSE:  YES and YES

Question 41. How can government support development of emerging tools and standards, reflecting the technical challenges associated with labelling tools?

RESPONSE: Unclear; labelling is straightforward with AI and tech tools.

Question 42. What are your views on the EU’s approach to AI output labelling?

RESPONSE: The EU AI Act, formally adopted by the EU in March 2024, requires providers of AI systems to mark their output as AI-generated content. This labelling requirement is meant to allow users to detect when they are interacting with content generated by AI systems, to address concerns like deepfakes and misinformation. Unfortunately, implementing one of the AI Act’s suggested methods for meeting this requirement – watermarking – may not be feasible or effective for some types of media. As the EU’s AI Office begins to enforce the AI Act’s requirements, the Government should closely evaluate the practicalities of AI watermarking.

Section D6: Digital Replicas and other issues

Question 43. To what extent would the approach(es) outlined in the first part of this consultation, in relation to transparency and text and data mining, provide individuals with sufficient control over the use of their image and voice in AI outputs?

RESPONSE: This is an important area that requires a more detailed review of the effectiveness of UK laws. Moral rights and personality/image rights such as exist in the EU would help individuals retain adequate control over their image, reputation and performance. This is an area that needs further review and, potentially, legislation. Ratification of international treaties on this topic, such as the Beijing Treaty, would be an important first step towards international cooperation on standards and enforcement frameworks.

There are significant limits on the control people have over their image and voice in the UK. To the extent image (or personality) rights are protected at all, it is via a mix of privacy law, data protection, contract law, moral rights and the common law tort of ‘passing off’. The approaches outlined in the first part of the consultation do not materially improve individuals’ position in relation to the use of their image and voice in AI outputs: they are directed to the use of copyright works. It does not follow that a copyright work is directly probative of a person’s image and/or voice. Further, it does not follow that the owner of that copyright work is the person in question.

Question 44. Could you share your experience or evidence of AI and digital replicas to date?

RESPONSE: Real-time digital replicas can cause, and have caused, serious damage, including to people we know who have been fooled by sophisticated AI scams. Real-time artificial intelligence replicas of real people (actors, well-known personalities and even family members) can be easily cloned from information available on social media and images shared on the internet, and can cause irreparable damage to individuals who may be ill-prepared or ill-equipped to address them; those in the public arena (including actors, artists and even politicians) may suffer financial harm as well as reputational damage.

There have also been examples of deepfake videos of politicians in recent times in the UK- for example of Sadiq Khan and Sir Keir Starmer.  A change in the law to explicitly cover acts like these, rather than leaving recourse only to adjacent rights such as defamation or passing off would, in our view, be advisable.

Section D7 – Emerging Issues

Question 45. Is the legal framework that applies to AI products that interact with copyright works at the point of inference clear? If it is not, what could the government do to make it clearer?

RESPONSE:  No comment – question unclear

Question 46. What are the implications of the use of synthetic data to train AI models and how could this develop over time, and how should the government respond?

RESPONSE: It is likely that the outputs and quality of AI tools trained on synthetic data will be degraded compared with those trained on original/real data.

Question  47. What other developments are driving emerging questions for the UK’s copyright framework, and how should the government respond to them?

RESPONSE:  None, at present. 

Section E

  1. End notes
  • Lord Clement-Jones CBE[15] is a Liberal Democrat life peer and the Liberal Democrat DSIT spokesperson in the House of Lords and, inter alia, Co-Chair of the All-Party Parliamentary Group on Artificial Intelligence. He was chair of the House of Lords Select Committee on Artificial Intelligence (2017–2018), is a former member of the Select Committee on Communications and Digital (2011–2015) and a former Lib Dem Lords spokesperson on the Creative Industries (2004–10). He is an officer and active member of the All-Party Parliamentary Group on Intellectual Property.
  • Christian Gordon-Pullar is an IP specialist and an experienced intellectual asset manager with more than 30 years’ experience, ranked in the IAM Top 300 Global IP Strategists in 2020–2024 (inclusive). He has a proven track record in IP in the fields of financial services, pharmaceuticals and life sciences, fintech and e-commerce, working at C-level with venture capital and private equity firms across portfolios. Until August 2024, Christian was Chairman of Fox Robotics Ltd, a UK agritech AI start-up. He has led IP licensing efforts in multinationals across Europe and Asia. Based in Singapore from 2001 to 2019, he also has significant Asia experience, where he was head of Tech, Intellectual Property and Corporate Functions Legal, AsiaPac at JPMorgan. Before that, he was global head of intellectual property at Standard Chartered Bank and CEO of Standard Chartered’s global IP licensing entity.[16] Christian was formerly a solicitor in the IP Group (TMT) at Lovell White Durrant, now Hogan Lovells, from 1993 to 1999.
  2. Consent. The individuals named above would be agreeable to being contacted by the Intellectual Property Office (UK IPO) in relation to this consultation.

APPENDICES  

  1. Watermarking

Watermarking of copyright content for LLMs is an active area of research and discussion, with several approaches being explored to address copyright concerns in AI training and generation. While watermarking shows promise, its practicality for preventing copyright theft is still strongly debated.

  • Embedding Watermarks: Researchers have proposed methods to implant backdoors on embeddings, such as the Embedding Watermark method. This technique aims to protect the copyright of LLMs used for Embedding as a Service (EaaS) by inserting watermarks into the embeddings of texts containing trigger words.
  • Output Watermarking: Some techniques focus on watermarking the text generated by LLMs. These methods can significantly reduce the probability of generating copyrighted content, potentially by tens of orders of magnitude. A minimal sketch of this family of schemes follows this list.
  • Model-Level Watermarking: A novel approach involves embedding signals directly into LLM weights, which can be detected by a paired detector. This method allows for watermarked model open-sourcing and can be more adaptable to new attacks.
  • Reinforcement Learning-Based Watermarking: A co-training framework using reinforcement learning has been proposed to iteratively train a detector and tune the LLM to generate easily detectable watermarked text while maintaining normal utility[17].
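
To illustrate the output-watermarking family, here is a toy, self-contained sketch in the spirit of published green-list/red-list schemes: the generator prefers a pseudorandom "green" subset of the vocabulary seeded by the previous token, and the detector computes a z-score on the green-token count. The vocabulary, parameters and toy "model" are our own assumptions; real schemes bias the logits of an actual LLM.

```python
# Toy green-list output watermark: generation biases toward "green" tokens;
# detection measures how improbably green a text is.
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]
GREEN_FRACTION = 0.5  # expected share of green tokens in unwatermarked text

def is_green(prev_token: str, token: str) -> bool:
    # Pseudorandom green/red assignment, seeded by the previous token.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def generate_watermarked(length: int, seed: int = 0) -> list[str]:
    # Toy "model": at each step, pick a green token whenever one is offered.
    rng = random.Random(seed)
    out = [rng.choice(VOCAB)]
    for _ in range(length - 1):
        candidates = rng.sample(VOCAB, 50)
        green = [t for t in candidates if is_green(out[-1], t)]
        out.append(rng.choice(green) if green else rng.choice(candidates))
    return out

def detect(tokens: list[str]) -> float:
    # z-score of the green count; large positive values indicate a watermark.
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    mean = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (greens - mean) / math.sqrt(var)

print(round(detect(generate_watermarked(200)), 1))                   # large
print(round(detect([random.choice(VOCAB) for _ in range(200)]), 1))  # ~0
```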

While watermarking shows potential, several factors affect its practicality in preventing copyright theft:

  1. Effectiveness: Some studies demonstrate that watermarking can significantly reduce the likelihood of generating copyrighted content. However, the effectiveness varies depending on the specific method and implementation.
  2. Detection Challenges: Detecting watermarks in fully black-box models remains difficult. Some methods, like DE-COP, have shown promise in detecting copyrighted content in training data, even for black-box models.
  3. Trade-offs: There is an inherent trade-off between watermark transparency and effectiveness. Increased transparency may make watermarks more detectable and modifiable.
  4. Implementation Constraints: Watermarking during the LLM training phase cannot be applied to already trained models, limiting its applicability to existing LLMs[18].
  5. Legal and Ethical Considerations: The use of copyrighted material in training datasets remains a contentious issue, with ongoing legal debates and lawsuits.

In conclusion, while watermarking techniques for LLMs are advancing rapidly, their practicality in preventing copyright theft is still uncertain. These methods show promise in reducing the generation of copyrighted content and potentially tracking its use, but challenges remain in implementation, detection, and legal frameworks. As the field evolves, a combination of technical solutions, legal guidelines, and ethical considerations will likely be necessary to address copyright concerns in AI effectively.

  2. EU Transparency requirements

The EU AI Act requires a “sufficiently detailed summary” of training data for General-Purpose AI (GPAI) models to ensure transparency and protect stakeholders’ rights, such as copyright holders. The required level of granularity includes:

  1. Data Sources and Types: Providers must disclose the origins of datasets (e.g., public or private databases, web data, user-generated content) and specify the types of data used (e.g., text, images, audio) across all training stages, from pre-training to fine-tuning.
  2. Content Description: Summaries must detail dataset size, filtering processes (e.g., removal of harmful content), augmentation methods, and whether copyrighted or personal data is included. This also involves specifying licensing terms for the data.
  3. Narrative Explanations: Clear, non-technical descriptions must accompany technical details to ensure accessibility for both experts and laypersons.

This level of detail is designed to balance transparency with the protection of trade secrets while enabling stakeholders to exercise their rights effectively.
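
By way of illustration only, since the Act mandates the substance of the disclosure rather than any particular schema, a machine-readable version of such a summary might look like the following Python sketch. Every field name and value here is hypothetical and simplified.

    # Hypothetical training-data summary for a GPAI model. The field names
    # are illustrative; the AI Act prescribes what must be disclosed, not
    # this particular structure.
    training_data_summary = {
        "model": "example-gpai-v1",
        "stages": ["pre-training", "fine-tuning"],
        "data_sources": [
            {"origin": "public web crawl", "types": ["text", "images"],
             "licensing": "mixed; opt-outs honoured"},
            {"origin": "licensed news archive", "types": ["text"],
             "licensing": "commercial licence with publisher"},
        ],
        "content_description": {
            "approx_size": "2 TB of text after filtering",
            "filtering": "deduplication; removal of harmful content",
            "augmentation": "none",
            "includes_copyrighted_material": True,
            "includes_personal_data": True,
        },
        "narrative_summary": (
            "Plain-language description of what was collected, from where, "
            "and how rights holders can tell whether their work is included."
        ),
    }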

[1] See Section C for details.

[2] See Section C for details.

[3] https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence

[4] https://ed.newtonrex.com/

[5] https://www.lordclementjones.org/2024/12/21/governments-ai-copyright-consultation-is-selling-out-to-the-techbros/

[6] https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence

[7] https://www.prsformusic.com/m-magazine/news/prs-for-music-announces-ai-principles

[8] https://www.prsformusic.com/press/2024/creative-rights-in-ai-coalition-calls-on-government-to-protect-copyright

[9] https://m.facebook.com/100063658326152/photos/1084206480377953/

[10] The post-implementation review, published in 2020, found (in relation to the series of exceptions introduced in 2014) that “the review has not identified any improvements in the assumptions which would change the original assessment. Based on the largely positive responses from the call for evidence that the original objectives remain valid, and evidence to suggest the exceptions are operating as intended, we find that it would therefore be appropriate for the exceptions to remain in their current form.” See https://www.legislation.gov.uk/uksi/2014/1372/pdfs/uksiod_20141372_en_002.pdf

[11] Eva-Maria Painer v Standard Verlags GmbH (C-145/10) C:2011:798 at [89]–[94] at [121]

[12] Second Request for Reconsideration for Refusal to Register Théâtre D’opéra Spatial (Copyright Review Board September 5, 2023). U.S. Copyright Office, Library of Congress. Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 16 March 2023 88 FR 16190.

[13] Australia: it is necessary to identify a human author in order for there to be an original literary work (Telstra Corporation Limited v Phone Directories Company Pty Ltd (2010) FCA 44); Singapore: copyright only arises when a work is created by a human author (Asia Pacific Publishing Pte Ltd v Pioneers & Leaders (Publishers) Pte Ltd [2011] SGCA 37).

[14] Bently et al, Intellectual Property Law, 6th Edn at [138].

  1. UK Intellectual Property Office, “Consultation outcome—Artificial Intelligence and Intellectual Property: copyright and patents: Government response to consultation” (GOV.UK, updated 28 June 2022)

[15] https://www.libdems.org.uk/tim_clement_jones

[16] https://www.iam-media.com/strategy300/individuals/christian-gordon-pullar

[17] https://openreview.net/forum?id=r6aX67YhD9

[18] https://arxiv.org/html/2501.02446v1

]]>
Government’s AI Copyright Consultation is Selling out to the Techbros https://www.lordclementjones.org/2024/12/21/governments-ai-copyright-consultation-is-selling-out-to-the-techbros/ Sat, 21 Dec 2024 17:27:08 +0000 https://www.lordclementjones.org/?p=76738 We have recently seen the publication of the Government’s Copyright and AI Consultation paper. This is my take on it.

I co-chair the All Party Parliamentary Group for AI, chaired the AI Select Committee and wrote a book earlier this year on AI regulation. Before that I had a career as a lawyer defending copyright and creativity, and in the House of Lords I have been my party’s creative industries spokesperson. For me, the question of IP and AI is absolutely the key issue that has arisen in relation to generative AI models. It is one thing to use tech, another to be at the mercy of it.

It is a major issue not just in the UK but around the world. Getty and the New York Times are suing in the United States, as are many writers, artists and musicians, and it was at the root of the Hollywood actors’ and writers’ strikes last year.

Here in the UK, as the Government’s intentions have become clearer, the temperature has risen. We have seen the creation of a new campaign across the creative and news industries, the Creative Rights in AI Coalition (CRAIC), and Ed Newton-Rex gathering over 30,000 signatures from creators and creative organisations.

But with the new government consultation, which came out a few days ago, we are now faced with a proposal for a text and data mining exception, an issue we thought was settled under the last Government. It starts from the false premise of legal uncertainty.

As the News Media Association say:

The government’s consultation is based on the mistaken idea—promoted by tech lobbyists and echoed in the consultation—that there is a lack of clarity in existing copyright law. This is completely untrue: the use of copyrighted content by Gen AI firms without a license is theft on a mass scale, and there is no objective case for a new text and data mining exception.

There is no lack of clarity over how AI developers can legally access training data. UK law is absolutely clear that commercial organisations – including Gen AI developers – must license the data they use to train their Large Language Models (“LLMs”).

Merely because AI platforms such as Stability AI are resisting claims does not mean the law in the UK is uncertain. There is no need for developers to find it ‘difficult to navigate copyright law in the UK’.

AI developers have already reached agreement with news publishers in a number of cases. OpenAI has signed deals with publishers like News Corp, Axel Springer, The Atlantic, and Reuters, offering annual payments between $1 million and $5 million, with News Corp’s deal reportedly worth $250 million over five years.

There can be no excuse of market failure. There are well established licensing solutions administered by a variety of well-established mechanisms and collecting societies. There should be no uncertainty around the existing law. We have some of the most effective collective rights organisations in the world. Licensing is their bread and butter.

The consultation paper says that “the government believes that the best way to achieve these objectives is through a package of interventions that can balance the needs of the two sectors”. Ministers Lord Vallance and Feryal Clark MP seem to think we need a balance between the creative industries and the tech industries. But what kind of balance is this?

The government is proposing to change the UK’s copyright framework by creating a text and data mining exception where rights holders have not expressly reserved their rights—in other words, an ‘opt-out’ system, where content is free to use unless a rights holder proactively withholds consent. To complement this, the government is proposing: (a) transparency provisions; and (b) provisions to ensure that rights reservation mechanisms are effective.

The government has stated that it will only move ahead with its preferred ‘rights reservation’ option if the transparency and rights reservation provisions are ‘effective, accessible, and widely adopted’. However, it will be up to Ministers to decide what provisions meet this standard, and it is clear that the government wishes to move ahead with this option regardless of workability, without knowing if its own standards for implementation can be met.

Although it is absolutely clear that the use of copyright works to train AI models without a licence is contrary to UK copyright law, the laws around transparency of these activities have not caught up. As well as using pirated e-books in their training data, AI developers scrape the internet for valuable professional journalism and other media, in breach of both the terms of service of websites and copyright law, for use in training commercial AI models.

At present, developers can do this without declaring their identity, or they may use IP scraped for inclusion in a search index for the completely different commercial purpose of training AI models.

How can rights owners opt out of something they do not know about? AI developers will often scrape websites, or access other pirated material, before they launch an LLM in public. This means there is no way for IP owners to opt out of their material being taken before its inclusion in these models. And once it has been used to train these models, the commercial value has already been extracted from IP scraped without permission, with no way to delete data from those models.

The next wave of AI models responds to user queries by browsing the web to extract valuable news and information from professional news websites. This is known as retrieval-augmented generation (RAG). Without payment for extracting this commercial value, AI agents built by companies such as Perplexity, Google and Meta will effectively free-ride on the professional hard work of journalists, authors and creators. At present such crawlers are hard to block.
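
For readers unfamiliar with the pattern, here is a deliberately toy Python sketch of the retrieve-then-generate loop described above. The two-article ‘index’ and the placeholder generator are invented for illustration; a real system would use web-scale retrieval and an LLM, but the point at which value is extracted is the same.

    def retrieve(query: str, index: dict, k: int = 1) -> list:
        # Toy retrieval: rank stored articles by word overlap with the query.
        def score(doc: str) -> int:
            return len(set(query.lower().split()) & set(doc.lower().split()))
        return sorted(index.values(), key=score, reverse=True)[:k]

    def answer(query: str, index: dict) -> str:
        # The generator is handed the retrieved articles verbatim: this is
        # the step at which publishers' content is extracted and reused.
        context = "\n".join(retrieve(query, index))
        return f"[model output conditioned on]\n{context}\n[question] {query}"

    news_index = {  # stand-in for scraped professional journalism
        "a1": "new copyright consultation opens on text and data mining",
        "a2": "football results from the weekend fixtures",
    }
    print(answer("what is in the copyright consultation", news_index))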

This is incredibly concerning, given that no effective ‘rights reservation’ system for the use of content by Gen AI models has been proposed or implemented anywhere in the world, making the government proposals entirely speculative.

As the NMA also say, what the government is proposing is an incredibly unfair trade-off: giving the creative industries a vague commitment to transparency, whilst giving the rights of hundreds of thousands of creators to Gen AI firms. While creators are desperate for a solution after years of copyright theft by Gen AI firms, making a crime legal cannot be the solution to mass theft.

We need transparency and a clear statement about copyright. We absolutely should not expect artists to have to opt out. AI developers must: be transparent about the identity of their crawlers; be transparent about the purposes of their crawlers; and have separate crawlers for distinct purposes.
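
To see why identified, purpose-specific crawlers matter, here is a short Python sketch using the standard library’s robots.txt parser. The bot names and rules are hypothetical; the point is that a publisher can only permit search indexing while refusing AI training if those activities are carried out by honestly declared, separate crawlers.

    from urllib.robotparser import RobotFileParser

    # Illustrative robots.txt: search indexing allowed, a hypothetical
    # training crawler refused. Opt-outs of this kind only work if
    # developers run honestly identified, purpose-specific crawlers.
    robots_txt = """\
    User-agent: ExampleSearchBot
    Allow: /

    User-agent: ExampleTrainingBot
    Disallow: /
    """

    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())

    print(parser.can_fetch("ExampleSearchBot", "/articles/1"))    # True
    print(parser.can_fetch("ExampleTrainingBot", "/articles/1"))  # False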

Unless news publishers and the broader creative industries can retain control over their data – making UK copyright law enforceable – AI firms will be free to scrape the web without remunerating creators. This will not only reduce investment in trusted journalism, but it will ultimately harm innovation in the AI sector. If less and less human-authored IP is produced, tech developers will lack the high-quality data that is the essential fuel in generative AI.

Amending UK law to address the challenges posed by AI development, particularly in relation to copyright and transparency, is essential to protect the rights of creators, foster responsible innovation, and ensure a sustainable future for the creative industries.

This should apply regardless of the country in which the scraping of copyright material takes place, if developers market their product in the UK, and regardless of where the training takes place.

It will also ensure that AI start-ups based in the UK are not put at a competitive disadvantage due to the ability of international firms to conduct training in a different jurisdiction.

It is clear that AI developers have used their lobbying clout to persuade the government that a new exemption from copyright in their favour is required. As a result, the government seem to have sold out to the tech bros.

In response the creative industries and supporters such as myself will be vigorously opposing government plans for a new text and data mining exemption and ensuring we get answers to our questions:

What led the government to do a u-turn on the previous government’s decision to drop the text and data mining exemption it proposed?

What estimate has it made of the damage to the creative industries from implementing its clearly favoured option of a TDM exception plus opt-out?

Is damaging the most successful UK economic sector for the benefit of US AI developers what it means by balance?

Why has it not included the possibility of an opt-in TDM exception among the options in its consultation paper?

What is the difference between rights reservation and opting out? Isn’t this pure semantics?

What examples of successful, workable opt-outs or rights reservations from TDMs can it draw on, particularly for small rights holders? What research has it done? The paper essentially admits that effective technology is not there yet. Isn’t it clear that the EU opt-out system under the Copyright Directive has not delivered clarity?

What regulatory mechanism, if any, does the government envisage if its proposal for a TDM exception with rights reservation/opt-out is adopted? How are creators to be sure any new system would work in the first place?

]]>
We Need Better Protection for Citizens in the Face of Automated Decision Making https://www.lordclementjones.org/2024/12/21/we-need-better-protection-for-citizens-in-the-face-of-automated-decision-making/ Sat, 21 Dec 2024 16:39:44 +0000 https://www.lordclementjones.org/?p=76723 The Second Reading of my Private Member’s Bill took place recently. It is designed to give greater rights to all of us who are subject to AI and automated decision-making in government, which is becoming increasingly prevalent with the enthusiasm of the new Labour government to “digitally transform” our public services.

 I thank Big Brother Watch, the Public Law Project and the Ada Lovelace Institute, which, each in their own way, have provided the evidence and underpinned my resolve to ensure that we regulate the adoption of algorithmic and AI tools in the public sector, which are increasingly being used across it to make and support many of the highest-impact decisions affecting individuals, families and communities across healthcare, welfare, education, policing, immigration and many other sensitive areas of an individual’s life. I also thank the Public Bill Office, the Library and other members of staff for all their assistance in bringing this Bill forward and communicating its intent and contents, and I thank all noble Lords who have taken the trouble to come to take part in this debate this afternoon.

The speed and volume of decision-making that new technologies will deliver is unprecedented. They have the potential to offer significant benefits, including improved efficiency and cost-effectiveness in government operations, enhanced service delivery and resource allocation, better prediction and support for vulnerable people, and increased transparency in public engagement. However, the rapid adoption of AI in the public sector also presents significant risks and challenges: the potential for unfairness, discrimination and misuse through algorithmic bias, the need for human oversight, a lack of transparency and accountability in automated decision-making processes, and privacy and data protection concerns.

Incidents such as the 2020 A-level and GCSE grading fiasco, where an algorithm used to estimate grades for exams cancelled because of Covid-19 saw students, particularly those from lower-income areas, unfairly miss out on university places, have starkly illustrated the dangers of unchecked algorithmic systems in public administration. That led to widespread public outcry and a loss of trust in government use of technology.

Big Brother Watch’s investigations have revealed that councils across the UK are conducting mass profiling and citizen scoring of welfare and social care recipients. Its report, entitled Poverty Panopticon: The Hidden Algorithms Shaping Britain’s Welfare State, uncovered alarming statistics. Some 540,000 benefits applicants are secretly assigned fraud risk scores by councils’ algorithms before accessing housing benefit or council tax support. Personal data from 1.6 million people living in social housing is processed by commercial algorithms to predict rent non-payers. Over 250,000 people’s data is processed by secretive automated tools to predict the likelihood of abuse, homelessness or unemployment.

Big Brother Watch criticises the nature of these algorithms, stating that most are secretive, unevidenced, incredibly invasive and likely discriminatory. It argues that these tools are being used without residents’ knowledge, effectively creating tools of automated suspicion. The organisation rightly expressed deep concern that these risk-scoring algorithms could be disadvantaging and discriminating against Britain’s poor. It warns of potential violations of privacy and equality rights, drawing parallels to controversial systems like the Metropolitan Police’s gangs matrix database, which was found to be operating unlawfully. From a series of freedom of information requests last June, Big Brother Watch found that a flawed DWP algorithm wrongly flagged 200,000 housing benefit claimants for possible fraud and error, which meant that thousands of UK households every month had their housing benefit claims unnecessarily investigated.

In August 2020, the Home Office agreed to stop using an algorithm to help sort visa applications after it was discovered that the algorithm contained entrenched racism and bias, and following a challenge from the Joint Council for the Welfare of Immigrants and the digital rights group Foxglove. The algorithm essentially created a three-tier system for immigration, with a speedy boarding lane for white people from the countries most favoured by the system. Privacy International has raised concerns about the Home Office’s use of a current tool called Identify and Prioritise Immigration Cases—IPIC—which uses personal data, including biometric and criminal records to prioritise deportation cases, arguing that it lacks transparency and may encourage officials to accept recommended decisions without proper scrutiny.

Automated decision-making has been proven to lead to harms in privacy and equality contexts, such as in the Harm Assessment Risk Tool, which was used by Durham Police until 2021, and which predicted reoffending risks partly based on an individual’s postcode in order to inform charging decisions. All these cases illustrate how ADM can perpetuate discrimination. The Horizon saga illustrates how difficult it is to secure proper redress once the computer says no.

There is no doubt that our new Government are enthusiastic about the adoption of AI in the public sector. Both the DSIT Secretary of State and Feryal Clark, the AI Minister, are on the record about the adoption of AI in public services. They have ambitious plans to use AI and other technologies to transform public service delivery. Peter Kyle has said:

“We’re putting AI at the heart of the government’s agenda to boost growth and improve our public services”,

and

“bringing together digital, data and technology experts from across Government under one roof, my Department will drive forward the transformation of the state”.—[Official Report, Commons, 2/9/24; col. 89.]

Feryal Clark has emphasised the Administration’s desire to “completely transform digital Government” with DSIT. As the Government continue to adopt AI technologies, it is crucial to balance the potential benefits with the need for responsible and ethical implementation to ensure fairness, transparency and public trust.

The Ada Lovelace Institute warns of the unintended consequences of AI in the public sector, including the risk of entrenching existing practices, instead of fostering innovation and systemic solutions. As it says, the safeguards around automated decision-making, which exist only in data protection law, are therefore more critical than ever in ensuring people understand when a significant decision about them is being automated, why that decision is made, and have routes to challenge it, or ask for it to be decided by a human.

Our citizens need greater, not less, protection, but rather than accepting the need for these, we see the Government following in the footsteps of their predecessor by watering down such rights as there are under GDPR Article 22 not to be subject to automated decision-making. We will, of course, be discussing these aspects of the Data (Use and Access) Bill in Committee next week.

ADM safeguards are critical to public trust in AI, but progress has been glacial. Take the Algorithmic Transparency Recording Standard, which was created in 2022 and is intended to offer a consistent framework for public bodies to publish details of the algorithms used in making these decisions. Six records were published at launch, and only three more seem to have been published since then. The previous Government announced earlier this year that the implementation of the Algorithmic Transparency Recording Standard will be mandatory for departments. Minister Clark in the new Government has said,

“multiple records are expected to be published soon”,

but when will this be consistent across government departments? What teeth do the Central Digital and Data Office and the Responsible Technology Adoption Unit, now both within DSIT, have to ensure the adoption of the standard, especially in view of the planned watering down of the Article 22 GDPR safeguards? Where is the promised repository for ATRS records? What about the other public services in local government too?

The Public Law Project, which maintains a register called Tracking Automated Government, believes that in October last year there were more than 55 examples of public-sector ADM systems in use. Where is the transparency on those? The fact is that the Government’s Algorithmic Transparency Recording Standard, while a step in the right direction, remains voluntary and lacks comprehensive adoption, a compliance mechanism or an opportunity for redress. The current regulatory landscape is clearly inadequate to address these challenges. Despite the existing guidance and framework, there is no legally enforceable obligation on public authorities to be transparent about their use of ADM and algorithmic systems, or to rigorously assess their impact.
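
For concreteness, the sketch below shows the kind of information such a transparency record might disclose, written as a Python dictionary. The fields are illustrative and simplified: the real ATRS defines its own, more detailed template, and nothing here is drawn from an actual published record.

    # Hypothetical, simplified transparency record for a public-sector
    # ADM tool; field names are illustrative, not the official ATRS schema.
    transparency_record = {
        "tool_name": "Housing Benefit Risk Scorer (example)",
        "organisation": "Example Borough Council",
        "purpose": "Prioritise claims for manual fraud-and-error review",
        "decision_role": "decision support only; final decision by a caseworker",
        "data_used": ["claim history", "tenancy records"],
        "assessments": ["data protection impact assessment",
                        "equality impact assessment"],
        "redress_route": "right to request human review of any flagged claim",
        "last_updated": "2024-10-01",
    }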

To address these challenges, several measures are needed. We need to see the creation of, and adherence to, ethical guidelines and accountability mechanisms for AI implementation; a clear regulatory framework and standards for use in the public sector; increased transparency and explainability in the adoption and use of AI systems; investment in AI education; and workforce development for public sector employees. We also need to see a right of redress, with a strengthened right for individuals to challenge automated decisions.

My Bill aims to establish a clear mandatory framework for the responsible use of algorithmic and automated decision-making systems in the public sector. It will help to prevent the embedding of bias and discrimination in administrative decision-making, protect individual rights and foster public trust in government use of new technologies.

I will not adumbrate all the elements of the Bill. In an era when AI and algorithmic systems are becoming increasingly central to government ambitions for greater productivity and public service delivery, this Bill, I hope noble Lords agree, is crucial to ensuring that the benefits of these technologies are realised while safeguarding democratic values and individual rights. By ensuring that ADM systems are used responsibly and ethically, the Bill facilitates their role in improving public service delivery, making government operations more efficient and responsive.

The Bill is not merely a response to past failures but a proactive measure to guide the future use of technology within government and empower our citizens in the face of these powerful new technologies. I hope that the House and the Government will agree that this is the way forward.

]]>
Lords Debate Regulators : Who Watches the Watchdogs? https://www.lordclementjones.org/2024/09/21/lords-debate-who-watches-the-watchdogs/ Sat, 21 Sep 2024 10:55:23 +0000 https://www.lordclementjones.org/?p=76702 Recently the Lords held a debate on the report of the Industry and Regulators Select Committee (on which I sit) entitled “Who Watches the Watchdogs?”, about the scrutiny given to the performance, independence and competence of our regulators.

This is what I said. It was an opportunity, as ever, to emphasise that regulation is not the enemy of innovation, or indeed growth, but can in fact, by providing certainty of standards, be the platform for it.

The Grenfell report and today’s Statement have been an extremely sobering reminder of the importance of effective regulation and the effective oversight of regulators. The principal job of regulation is to ensure societal safety and benefit—in essence, mitigating risk. In that context, the performance of the UK regulators, as well as the nature of regulation, is crucial.

In the early part of this year, the spotlight was on regulation and the effectiveness of our regulators. Our report was followed by a major contribution to the debate from the Institute for Government. We then had the Government’s own White Paper, Smarter Regulation, which seemed designed principally to take the growth duty established in 2015 even further with a more permissive approach to risk and a “service mindset”, and risked creating less clarity with yet another set of regulatory principles going beyond those in the Better Regulation Framework and the Regulators’ Code.

Our report was, however, described as excellent by the Minister for Investment and Regulatory Reform in the Department for Business and Trade under the previous Government, the noble Lord, Lord Johnson of Lainston, whom I am pleased to see taking part in the debate today. I hope that the new Government will agree with that assessment and take our recommendations further forward.

Both we and the Institute for Government identified a worrying lack of scrutiny of our regulators—indeed, a worrying lack of even identifying who our regulators are. The NAO puts the number of regulators at around 90 and the Institute for Government at 116, but some believe that there are as many as 200 that we need to take account of. So it is welcome that the previous Government’s response said that a register of regulators, detailing all UK regulators, their roles, duties and sponsor departments, was in the offing. Is this ready to be launched?

The crux of our report was to address performance, strategic independence and oversight of UK regulators. In exploring existing oversight, accountability measures and the effectiveness of parliamentary oversight, it was clear that we needed to improve self-reporting by regulators. However, a growth duty performance framework, as proposed in the White Paper, does not fit the bill.

Regulators should also be subject to regular performance evaluations, as we recommended; these reviews should be made public to ensure transparency and accountability. To ensure that these are effective, we recommended, as the noble Lord, Lord Hollick mentioned, establishing a new office for regulatory performance—an independent statutory body analogous to the National Audit Office—to undertake regular performance reviews of regulators and to report to Parliament. It was good to see that, similar to our proposal, the Institute for Government called for a regulatory oversight support unit in its subsequent report, Parliament and Regulators.

As regards independence, we had concerns about the potential politicisation of regulatory appointments. Appointment processes for regulators should be transparent and merit-based, with greater parliamentary scrutiny to avoid politicisation. Although strategic guidance from the Government is necessary, it should not compromise the operational independence of regulators.

What is the new Government’s approach to this? Labour’s general election manifesto emphasised fostering innovation and improving regulation to support economic growth, with a key proposal to establish a regulatory innovation office in order to streamline regulatory processes for new technologies and set targets for tech regulators. I hope that that does not take us down the same trajectory as the previous Government. Regulation is not the enemy of innovation, or indeed growth, but can in fact, by providing certainty of standards, be the platform for it.

At the time of our report, the IfG rightly said:

“It would be a mistake for the committee to consider its work complete … new members can build on its agenda in their future work, including by fleshing out its proposals for how ‘Ofreg’ would work in practice”.

We should take that to heart. There is still a great deal of work to do to make sure that our regulators are clearly independent of government, are able to work effectively, and are properly resourced and scrutinised. I hope that the new Government will engage closely with the committee in their work.

]]>
Lord C-J Commentary on the new Government’s Science and Technology Programme https://www.lordclementjones.org/2024/07/24/lord-c-j-commentary-on-the-new-governments-science-and-technology-programme/ Wed, 24 Jul 2024 09:06:17 +0000 https://www.lordclementjones.org/?p=76679 Sadly we only had five minutes’ speaking time in the recent King’s Speech debate. Here is an extended version of my speech, which goes into greater depth on what I believe the Government should be doing in this area if it is to fulfil its growth-through-innovation agenda, and expresses some caveats about how they plan to do this.

When we debated the new Government’s proposals in the King’s Speech recently, the House of Lords gave a particularly warm welcome to Lord Vallance of Balham, formerly Sir Patrick Vallance, as the new Minister of State in the Department. From the book “The Long Shot” we know the critical role he played, while the Government’s Chief Scientific Adviser, in the establishment of the UK Vaccine Taskforce, which was set up in April 2020 in response to the COVID-19 pandemic. He was pivotal in the recruitment of Dame Kate Bingham to chair the Vaccine Taskforce and in organising the overall strategy for the UK development and distribution of COVID-19 vaccines. For that we should be eternally grateful.

I welcome the Government’s growth-through-innovation agenda and its mission to enhance public services through the deployment of new technology, and also the concentration of digital functions in DSIT, which will become the centre for “digital expertise and delivery in government, improving how the government and public services interact with citizens”, in the words of the new Secretary of State, Peter Kyle.

The Government is expanding the department’s scope and size by bringing in experts in data, digital and AI from the Government Digital Service, the Incubator for AI (i.AI) and the Central Digital and Data Office, to unite efforts to implement digital transformation of public services under one roof. There is great potential in justice, education and healthcare, to name but three areas.

This is crucial particularly in the adoption of innovative technologies and tools in our healthcare, for which Liberal Democrats believe there should be ring-fenced budgets. We need to be ensuring interoperability of IT systems too.

The Government have also committed to modernising public sector procurement frameworks to enable start-ups and SMEs to drive public sector innovation and better public services. Will, however, a clear, transparent framework of standards incorporating ethical principles be established? Public sector adoption is very desirable but requires trust on the part of the public and the citizen. For instance, we need to ensure that citizens can assert their rights when faced with automated decision-making or live facial recognition.

It has felt, under the previous regime, that universities have been under continual threat from government rather than valued as the engines of knowledge and growth, and we need to be far more internationally outward-looking, in particular fixing our relationship with the EU, using science and technology to address societal challenges “for a more resilient and prosperous future”, in the words of the Royal Society.

I welcome the new Industrial Strategy Council. Does this mean we can plan for 10 years of stability and opportunity creation in the science and tech sector? Successive policy changes to the R&D tax regime over the past several years have created uncertainty and additional red tape for SMEs, putting at risk the UK’s reputation as a location for innovative businesses. We need to give businesses certainty and incentivise them to invest in new technologies to grow the economy, create good jobs and tackle the climate crisis.

Opening up what can be a blocked pipeline all the way from R&D to commercialisation, from university spinout through start-up to scale-up and IPO, and crowding in and derisking private investment through the National Wealth Fund, the British Business Bank and post-Mansion House pension reforms, are crucial, with all their local, regional, national and UK-wide aspects, recognising the importance of innovation clusters and centres of excellence. We need to tackle regional disparities and develop the innovation clusters with greater devolution to combined authorities.

Digital skills and digital literacy are also crucial, but to deploy digital tools successfully we also need a pipeline of creative, collaborative and critical-thinking skills. A massive skills and upskilling agenda is needed in the face of technology advances. The focus in training should be on lifelong skills grants, reforming the apprenticeship levy, and boosting vocational training and apprenticeships, and many of the government’s proposals in this respect are welcome.

In this context, as the chair of a university governing council, I very much welcome the Government’s new tone on the value of universities, on long-term settlements, and on resetting relations with Europe and international research collaboration.

The role of university research and spinouts is crucial. The Research Excellence Framework has the perverse incentive of discouraging cooperation. We should be encouraging strategic partnerships in research, especially internationally. We need to be full-throated members of Horizon; the uncertainty has been extremely damaging to collaboration. I hope the government will now commit to joining the European Innovation Council as well.

Last year Labour set out its plan for the life sciences. It committed to the investment of £10bn into R&D. Further, the plan said that Labour would see the creation of 100,000 jobs in the life sciences sector by 2030. The document contains a range of further welcome pledges, including strengthening the Office for Life Sciences and the Life Sciences Council, and bringing laboratory clusters within the scope of the ‘nationally significant infrastructure regime’ in England.

We need to ensure Government spending on R&D keeps pace with other nations, and establish a long-term strategy for science, research and innovation that commands cross-party support. Research, development and innovation are crucial to driving productivity growth, yet our current levels of R&D investment and productivity lag the G7. I hope this means that we will soon see whether spending plans for government R&D expenditure by 2030 and 2035 match their words.

And disproportionately high overseas researcher visa costs must be lowered, as Lord Vallance recommended in his review of digital technologies. UK visa costs are up to 17 times higher than those of other leading science nations. The Royal Society have called this a “punitive tax on talent”.

But support for innovation should not be unconditional or at any cost. I hope this government will not fall into the trap of viewing regulation as necessarily the enemy of innovation. We need guardrails to ensure that, for example, AI adoption leads to public benefit.

I hope therefore that the reference to AI regulation in the King’s Speech, but the failure to announce a bill, is only a timing issue. What is the Government’s intention, especially given that an AI bill was heavily trailed in the media?

With AI technologies continuing to develop at an exponential rate, clarity on regulation is needed by developers and adopters. There is the question too of to what extent the new government will depart from the current sectoral approach to regulating AI and adopt a cross-sectoral approach. What does the King’s Speech reference to regulating “the most powerful artificial intelligence models” actually refer to? Will the government be launching yet another consultation on AI regulation?

There is no doubt we need to seize the opportunities of AI whilst making sure we mitigate its risks, ensuring that ethical standards for AI development and use are adopted.

 Liberal Democrats believe we need to create a clear, workable and well-resourced cross-sectoral regulatory framework for artificial intelligence that:

  • Promotes innovation while creating certainty for AI users, developers and investors.
  • Establishes transparency and accountability for AI systems in the public sector.
  • Ensures the use of personal data and AI is unbiased, transparent and accurate, and respects the privacy of innocent people.

The government in particular should lead the way in ensuring that there is a high level of transparency and opportunity for redress when algorithmic and automated systems are used in government. I commend my new Private Member’s Bill (the Public Authority Algorithmic and Automated Decision-Making Systems Bill) to it!

The government should also negotiate the UK’s participation in the Trade and Technology Council with the US and the EU, so we can play a leading role in global AI regulation, and we should work with international partners in agreeing common global standards for AI risk and impact assessment, testing, training, monitoring and audit.

As regards AI regulation in the King’s Speech itself, we are promised a Product Safety and Metrology Bill, which could require alignment of AI-driven products with the EU AI Act; that seems to be putting the cart well in front of the AI regulatory horse.

We do need, however, to ensure that high-risk systems are mandated to adopt international ethical and safety standards. At the same time, in this age of IoT, we should require all suppliers to provide a short, clear version of their terms and conditions, setting out the key facts as they relate to individuals’ data and privacy.

As regards the creative industries, there are clearly great opportunities in relation to the use of AI, but there are also challenges and big questions over authorship and intellectual property, and many artists feel threatened: the root cause of the recent Hollywood writers’ and actors’ strikes. What is the government’s approach?

We need to establish very clearly that generative AI systems need a licence to ingest copyright material for training purposes, just as Mumsnet and the New York Times are asserting, and that there is an obligation of transparency in the use of data sets and original content.

Lord Vallance is on record as wanting certainty in the relationship between IP rights and generative AI for innovator and investor confidence. And this should be the case for creatives too. Copyright content needs to be properly remunerated by the tech platforms. The bill needs to make clear that platforms profit from content and need to pay properly and fairly, on benchmarked terms and with reference to value for end users, when content is used for training large language models.

And when will the government set up the promised new Regulatory Innovation Office? This was promised as an organisation to help “regulators to update regulation, speed up approval timelines and co-ordinate issues that span existing boundaries”, and as a “pro-innovation body” designed to “set targets for tech regulators, end uncertainty for businesses, turbocharge output, and boost economic growth”. We need in particular to know whether it will replace the Digital Regulation Cooperation Forum.

We must also ensure we have the right climate for FDI. The Harrington Report called for a new business investment strategy for the Office for Investment. Despite the previous government’s Life Sciences Vision, we have seen the pharma company Eli Lilly pulling investment in laboratory space in London because the UK “does not invite inward investment at this time”, and AstraZeneca decided to build its next plant in Ireland because of the UK’s “discouraging” tax rate.

We also need to modernise employment rights to make them fit for the age of the gig economy, including by establishing a new ‘dependent contractor’ employment status in between employment and self-employment, with entitlements to basic rights such as minimum earnings levels, sick pay and holiday entitlement.

There is a great need for greater diversity and inclusion in the AI workforce, and in science and technology more broadly. Only one in four senior tech employees in the UK are women, and only 14% are from ethnic minorities.

I hope the Government too is fully committed, despite its growth agenda, to full-hearted support for the Competition and Markets Authority in the use of its powers under the new Digital Markets, Competition and Consumers Act. I welcome the CMA’s market investigation into cloud services and its reassurance that it is looking broadly at the anti-competitive practices of the service providers, such as vendor lock-in tactics and non-competitive procurement.

Then again how will the government kickstart better progress on Project Gigabit? Given the competitive model for rollout of broadband services that has been chosen, investors in alternative providers to the incumbents need reassurance that their investment is going onto a level playing field and not one tilted in favour of the incumbents. 

Also, in terms of vital cross-departmental working and joining up government on science and technology policy, we need to know what the role of the National Science and Technology Council will be and what its key priorities are.

There was no mention in Labour’s manifesto of the potential impact of AI on the workplace. The TUC and the Institute for the Future of Work are among those who have called for new legislation to create further legal protections for workers and employers in relation to the use of AI. The government should introduce safeguards against the invasion of privacy through surveillance technology and discriminatory algorithmic decision-making in the workplace, along the lines of the TUC’s draft bill, and algorithmic impact assessments along the lines of the IFOW’s proposals.

The Government will also need to decide how to follow up on the recommendations of recent key reports such as:

  • Professor Dame Angela McLean’s Review of Life Sciences
  • The Vallance Review of Pro-innovation Regulation of Digital Technologies
  • The Independent Review of Research Bureaucracy by Professor Adam Tickell
  • The Independent Review of the UKRI by Sir David Grant
  • The Independent Review of the UK’s Research, Development and Innovation Landscape by Sir Paul Nurse
  • The O’Shaughnessy Report on Clinical Trials
  • The Independent Review of the Future of Compute by Professor Zoubin Ghahramani FRS and 
  • The Independent Review of University Spin-out Companies by Professor Irene Tracey and Dr. Andrew Williamson

More broadly, it will need to set out its approach to the science and technology framework for DSIT set out by the previous government in 2023, with its 10 priority areas. Will this be revised? If so, they need to set measurable targets and key outcomes in the priority areas. The government will also need to take a clear view on the key technologies we should be assisting in developing and commercialising.

Then there are the pre-existing financial commitments in the science and technology field. The Chancellor has said she will be checking all the previous government’s commitments for affordability. Which of the previous Government’s financial commitments will she confirm? For instance:

The £7.4 million upskilling fund pilot to help SMEs develop AI skills.

Investing up to £100 million in the Alan Turing Institute over the next five years (up from £50 million)

The £100 million investment by the British Business Bank into ICG, in respect of the Long-term Investment for Technology and Science (LIFTS) initiative.

The £1.1 billion funding for 65 Centres for Doctoral Training (CDTs) through the Engineering and Physical Sciences Research Council (EPSRC), covering key technologies like AI and engineering biology

As regards the bills in the King’s Speech, I look forward to seeing the details, but the Digital Information and Smart Data Bill does seem to be heading in the right direction in the areas being reinstated. The retention and enhancement of public trust in data use and sharing is the overriding need, so that the potential of data can be unleashed through better, trusted sharing of data. It is really important that we do more to educate the public about how and where our data is used and what powers individuals have to find out this information.

I hope that, other than a few clarifications, especially in the research area and in terms of the constitution of the ICO, we are not going to exhume some of the worst areas of the old DPDI Bill, and that we have ditched the idea of a Brexit EU divergence dividend achieved by watering down so many data subject rights.

Will the Government give a firm commitment to safeguard our data adequacy with the EU? Will the bill introduce the promised ban on the creation of sexually explicit deepfakes?

I also hope that the Government will confirm that the intent of the reinstated digital verification provisions is not a compulsory national digital ID but the creation of a market in digital ID providers that gives choice to the citizen.

Given that LinesearchbeforeUdig, or LSBUD, is claimed already to achieve the aims of NUAR, to be more widely used than the National Underground Asset Register (NUAR) and to be more cost-effective, I hope also that Ministers will meet LSBUD and provide us all with much greater clarity around these proposals.

I hope that we can include other positive aspects of the late unlamented DPDI Bill in the bill: more action on online fraud, digital identity theft, deepfakes in elections, misinformation and disinformation, and misogyny as a hate crime; there is quite a list of possibilities. Together with these, we need new models of personal data control, advocated as long ago as 2017 in the Hall-Pesenti review, especially through new data communities and institutions, an enhanced ability to exercise our right to data portability, especially in real time, and more regulatory oversight over the use of biometrics and biometric technologies.

I of course welcome the pledge to give coroners more powers to access information held by technology companies after a child’s death, and to ban the creation of sexually explicit deepfakes.

As regards the Cyber Security and Resilience Bill, events of recent days have made it clear we are not just talking about threats from bad actors. They remind us how dependent we are on just a few overly dominant major tech companies. With Microsoft and AWS enjoying a combined UK market share of around 70-90%, according to the Competition and Markets Authority’s own research, the lack of competition presents serious concerns for our nation’s security and resilience. There needs to be a rethink on critical national infrastructure such as cloud services and business software, which are now essential public utilities, and also on how we are wholesale replacing reliable analogue communication with digital systems without backup.

In the bill I hope we will see the long-awaited amendment of the Computer Misuse Act to include a statutory public interest defence, as called for by CyberUp, to allow white-hat research into computer systems, as the Vallance report recommended. The rules for computer evidence must be changed too. We must have no more Horizon scandals!

]]>
Lords Committee Highly Critical of Office for Students https://www.lordclementjones.org/2024/06/02/lords-committee-highly-critical-of-office-for-students/ Sun, 02 Jun 2024 17:42:22 +0000 https://www.lordclementjones.org/?p=76649
]]>
Data Protection and Digital Information Bill lost in wash up-Hurray! https://www.lordclementjones.org/2024/06/02/data-protection-and-dighital-information-bill-lost-in-wash-up/ Sun, 02 Jun 2024 17:33:05 +0000 https://www.lordclementjones.org/?p=76635
]]>
We Need a New Offence of Digital ID Theft https://www.lordclementjones.org/2024/04/20/we-need-a-new-offence-of-digital-id-theft/ Sat, 20 Apr 2024 16:40:03 +0000 https://www.lordclementjones.org/?p=76613 As part of the debates on the Data Protection Bill, I recently advocated for a new digital ID theft offence. This is what I said.

It strikes me as rather extraordinary that we do not have an identity theft offence. This is the Metropolitan Police guidance for the public:

“Your identity is one of your most valuable assets. If your identity is stolen, you can lose money and may find it difficult to get loans, credit cards or a mortgage. Your name, address and date of birth provide enough information to create another ‘you’”.

It could not be clearer. It goes on:

“An identity thief can use a number of methods to find out your personal information and will then use it to open bank accounts, take out credit cards and apply for state benefits in your name”.

It then talks about the signs that you should look out for, saying:

“There are a number of signs to look out for that may mean you are or may become a victim of identity theft … If you think you are a victim of identity theft or fraud, act quickly to ensure you are not liable for any financial losses … Contact CIFAS (the UK’s Fraud Prevention Service) to apply for protective registration”.

However, there is no criminal offence.

Interestingly enough, I mentioned this to the noble Baroness, Lady Morgan. Back in October 2022, her committee—the Fraud Act 2006 and Digital Fraud Committee—produced a really good report, Fighting Fraud: Breaking the Chain, which said:

“Identity theft is often a predicate action to the criminal offence of fraud, as well as other offences including organised crime and terrorism, but it is not a criminal offence. Cifas data shows that cases of identity fraud increased by 22% in 2021, accounting for 63% of all cases recorded to Cifas’ National Fraud Database”.

It goes on to talk about identity theft to some good effect but states:

“In February 2022, the Government confirmed that there were no plans to introduce a new criminal offence of identity theft as ‘existing legislation is in place to protect people’s personal data and prosecute those that commit crimes enabled by identity theft’”.

I do not think the committee agreed with that at all. It said:

“The Government should consult on the introduction of legislation to create a specific criminal offence of identity theft. Alternatively, the Sentencing Council should consider including identity theft as a serious aggravating factor in cases of fraud”.

The Government are certainly at odds with the Select Committee chaired by the noble Baroness, Lady Morgan. I am indebted to a creative performer called Bennett Arron, who raised this with me some years ago. He related with some pain how he took months to get back his digital identity. He said: “I eventually, on my own, tracked down the thief and gave his name and address to the police. Nothing was done. One of the reasons the police did nothing was because they didn’t know how to charge him with what he had done to me”. That is not a good state of affairs. Then we heard from Paul Davis, the head of fraud prevention at TSB. The headline of the piece in the Sunday Times was: “I’m head of fraud at a bank and my identity was still stolen”. He is top dog in this area, and he has been the subject of identity theft.

This seems an extraordinary situation, whereby the Government are sitting on their hands. There is a clear issue with identity theft, yet they have refused to act—they have gone into print, in response to the committee chaired by the noble Baroness, Lady Morgan—saying, “No, no, we don’t need anything like that; everything is absolutely fine”. I hope that the Minister can give a better answer this time around.

]]>