AI technology urgently needs proper regulation beyond a voluntary ethics code

Lord C-J House Magazine February 2020

We already have the most comprehensive CCTV coverage in the Western world. Add artificial intelligence driven live facial recognition and you have all the makings of a surveillance state, writes Lord Clement-Jones.

In recent months live facial recognition technology has been much in the news.

Despite having been described as ‘potentially Orwellian’ by the Metropolitan Police Commissioner, and ‘deeply concerning’ by the Information Commissioner, the Met has now announced its widespread adoption.

The Ada Lovelace Institute, in its report Beyond Face Value, raised similar concerns.

The Information Commissioner has been consistent in her call for a statutory code of practice to be in place before facial recognition technology can be safely deployed by police forces, saying: “Never before have we seen technologies with the potential for such widespread invasiveness... The absence of a statutory code that speaks to the challenges posed by LFR will increase the likelihood of legal failures and undermine public confidence.”

My fellow Liberal Democrats and I share these concerns. We already have the most comprehensive CCTV coverage in the Western world. Add to that artificial intelligence driven live facial recognition and you have all the makings of a surveillance state.

The University of Essex in its independent report last year demonstrated the inaccuracy of the technology being used by the Met. Analysis of six trials found that the technology mistakenly identified innocent people as “wanted” in 80 per cent of cases.

Even the Home Office’s own Biometrics and Forensics Ethics Group has questioned the accuracy of live facial recognition technology and noted its potential for biased outputs and biased decision-making on the part of system operators.

As a result, the Science and Technology Select Committee last year recommended an immediate moratorium on its use until concerns over the technology’s effectiveness and potential bias have been fully resolved.

To make matters worse, in answer to a recent parliamentary question Baroness Williams of Trafford outlined the types of people who can be included on a watch list through this technology. They are persons wanted on warrants, individuals who are unlawfully at large, persons suspected of having committed crimes, persons who might be in need of protection, individuals whose presence at an event causes particular concern, and vulnerable persons.

It is chilling not only that this technology is in place and being used, but that the government has already arbitrarily decided on whom it is legitimate to use it.

A moratorium is therefore a vital first step. We need to put a stop to this unregulated invasion of our privacy and have a careful review.

I have now tabled a private member’s bill which first legislates for a moratorium and then institutes a review of the use of the technology, with minimum terms of reference covering: the equality and human rights implications of the use of automated facial recognition technology; the data protection implications of the use of that technology; the quality and accuracy of the technology; the adequacy of the regulatory framework governing how data is or would be processed and shared between entities involved in the use of facial recognition; and recommendations for addressing issues identified by the review.

At that point we can debate if or when its use is appropriate, and whether and how to regulate it. The outcome might be an absolute restriction, or permission for certain uses where regulation ensuring privacy safeguards is in place, together with full impact assessment and audit.

The Lords AI Select Committee I chaired recommended the adoption of a set of ethics around the development of AI applications, believing that in the main voluntary compliance was the way forward. But certain technologies need proper regulation now, beyond a voluntary ethics code. This is one such example, and it is urgent.

Lord Clement-Jones is a Liberal Democrat Member of the House of Lords and Liberal Democrat Lords Spokesperson for Digital.

https://www.politicshome.com/thehouse/article/ai-technology-urgently-needs-proper-regulation-beyond-a-voluntary-ethics-code


No room for government complacency on artificial intelligence, says new Lords report

Friday 18 December 2020

The Government needs to better coordinate its artificial intelligence (AI) policy and the use of data and technology by national and local government.

  • The increase in reliance on technology caused by the COVID-19 pandemic has highlighted the opportunities and risks associated with the use of technology and, in particular, data. Active steps must be taken by the Government to explain to the general public the use of their personal data by AI.
  • The Government must take immediate steps to appoint a Chief Data Officer, whose responsibilities should include acting as a champion for the opportunities presented by AI in the public service, and ensuring that understanding and use of AI, and the safe and principled use of public data, are embedded across the public service.
  • A problem remains with the general digital skills base in the UK. Around 10 per cent of UK adults were non-internet users in 2018. The Government should take steps to ensure that the UK’s digital skills are brought up to speed, as well as to ensure that people have the opportunity to reskill and retrain to be able to adapt to the evolving labour market brought about by AI.
  • AI will become embedded in everything we do. It will not necessarily make huge numbers of people redundant, but when the COVID-19 pandemic recedes and the Government has to address the economic impact of it, the nature of work will change and there will be a need for different jobs and skills. This will be complemented by opportunities for AI, and the Government and industry must be ready to ensure that retraining opportunities take account of this. In particular the AI Council should identify the industries most at risk, and the skills gaps in those industries. A specific national training scheme should be designed to support people to work alongside AI and automation, and to be able to maximise its potential.
  • The Centre for Data Ethics and Innovation (CDEI) should establish and publish national standards for the ethical development and deployment of AI. These standards should consist of two frameworks, one for the ethical development of AI, including issues of prejudice and bias, and the other for the ethical use of AI by policymakers and businesses.
  • For its part, the Information Commissioner’s Office (ICO) must develop a training course for use by regulators to give their staff a grounding in the ethical and appropriate use of public data and AI systems, and their opportunities and risks. Such training should be prepared with input from the CDEI, the Government’s Office for AI and the Alan Turing Institute.
  • The Autonomy Development Centre will be inhibited by the failure to align the UK’s definition of autonomous weapons with international partners: doing so must be a first priority for the Centre once established.
  • The UK remains an attractive place to learn, develop, and deploy AI. The Government must ensure that changes to the immigration rules promote rather than obstruct the study, research and development of AI.

There is also now a clear consensus that ethical AI is the only sustainable way forward. The time has come for the Government to move from deciding what the ethics are, to how to instil them in the development and deployment of AI systems.

These are the main conclusions of the House of Lords Liaison Committee’s report, AI in the UK: No Room for Complacency, published today, 18 December.

This report examines the progress made by the Government in the implementation of the recommendations made by the Select Committee on Artificial Intelligence in its 2018 report AI in the UK: ready, willing and able?

Lord Clement-Jones, who was Chair of the Select Committee on Artificial Intelligence, said:

“The Government has done well to establish a range of bodies to advise it on AI over the long term. However, we caution against complacency. There must be more and better coordination, and it must start at the top.

“A Cabinet Committee must be created whose first task should be to commission and approve a five-year strategy for AI. The strategy should prepare society to take advantage of AI rather than be taken advantage of by it.

“The Government must lead the way on making ethical AI a reality. To not do so would be to waste the progress it has made to date, and to squander the opportunities AI presents for everyone in the UK.”

https://www.parliament.uk/business/lords/media-centre/house-of-lords-media-notices/2020/december-2020/no-room-for-government-complacency-on-artificial-intelligence-says-new-lords-report/


Institute for Ethical AI in Education publishes new guidance for procuring AI teaching tools

The culmination of two years’ work by the Institute.

Lord Tim Clement-Jones, chair of the IEAIED and former chair of the House of Lords Select Committee on AI, warned that the unethical use of AI in education could “hamper innovation” by driving a ‘better safe than sorry’ mindset across the sector. “The Ethical Framework for AI in Education overcomes this fundamental risk. It’s now time to innovate.”

https://fb77c667c4d6e21c1e06.b-cdn.net/wp-content/uploads/2021/03/The-Institute-for-Ethical-AI-in-Education-The-Ethical-Framework-for-AI-in-Education.pdf


Lord C-J interviewed at "Our People-Centered Digital Future"

Sadly I couldn't join Vint Cerf, Sir Tim Berners-Lee, Dame Wendy Hall, and many others at this important event last December, but I contributed by video, describing the progress we were making in the UK and our priorities for ethical AI.

https://www.linkedin.com/pulse/our-people-centered-digital-future-vital-questions-david-bray-phd/


ORBIT Conference 2018 – Building In The Good: Creating Positive ICT Futures

The pervasive nature of information and communications technologies (ICT) in all aspects of our lives raises many exciting possibilities but also numerous concerns. Responsible Research and Innovation aims to maximise the benefits of technology whilst minimising the risks. Read more


CXO Talk 2018 - Public Policy: AI Risks and Opportunities

The power of artificial intelligence creates opportunities and risks that public policy must eventually address. Industry analyst and CXOTalk host, Michael Krigsman, speaks with two experts to explore the UK Parliament's House of Lords AI report. Read more


Lord C-J launches Select Committee report on AI: "We Need an Ethical Framework"

Recently I helped to launch the report of the Select Committee on AI, which I chaired. This is a piece I recently wrote about the report and its implications.

Barely a day goes by without a piece in the media on a new aspect of AI or robotics, including, I see, in today's Gulf Today. Some pessimistic, others optimistic. Read more


Lord C-J calls for Ethical framework for AI applications

As Co-Chair of the All-Party Parliamentary Group on Artificial Intelligence I recently gave a speech at the Berlin AI Expo on why business needs to develop an ethical framework for the use of AI and algorithms. This is what I said. Read more


New All Party Parliamentary Artificial Intelligence Group Being Formed

Stephen Metcalfe MP, Chair of the Science and Technology Select Committee, and I are forming a new All Party Group. We held a very well attended first meeting of the prospective group on 21st November. We are now formally creating the Group, and a full programme of meetings is planned. Read more