What will the Year of the Ox bring in UK-China relations?

Lord C-J: 2020 has brought huge problems to the fore - particularly the COVID-19 pandemic, which is by no means over; we hope the WHO mission to Wuhan sheds light on how to prevent further viruses from emerging and spreading. Meanwhile, repression in Xinjiang and the National Security Law in Hong Kong have damaged confidence in our bilateral relations.

https://www.youtube.com/watch?v=2lL5y_geeeQ

Saying No to Internal Vaccine Passports

https://bigbrotherwatch.org.uk/2021/04/70-mps-launch-cross-party-campaign-against-covid-passes/

70+ MPs LAUNCH CROSS-PARTY CAMPAIGN AGAINST COVID PASSES

BIG BROTHER WATCH TEAM / APRIL 2, 2021

Further to Sir Keir’s comments about vaccine passports being un-British and against the “British instinct”, over 70 MPs have launched a cross-party campaign opposing their “divisive and discriminatory use”.

MPs and peers from the Labour, Liberal Democrat and Conservative parties have signed a pledge to oppose the move.

Signatories include former Labour leader, Jeremy Corbyn, Labour MPs Dawn Butler and Rebecca Long Bailey, former director of Liberty, Baroness Shami Chakrabarti, and over 40 MPs from the Conservative Covid Recovery Group.

They have been supported by campaign groups Big Brother Watch, Liberty, the Joint Council for the Welfare of Immigrants (JCWI) and Privacy International.

The pledge states:

“We oppose the divisive and discriminatory use of COVID status certification to deny individuals access to general services, businesses or jobs.”

QUOTES

Baroness Chakrabarti said:

“International travel is a luxury but participating in your own community is a fundamental right. So internal Covid passports are an authoritarian step too far. We don’t defeat the virus with discrimination and oppression but with education, vaccination and mutual support.”

Leader of the Liberal Democrats, Ed Davey MP said:

“As we start to get this virus properly under control we should start getting our freedoms back, vaccine passports – essentially Covid ID cards – take us in the other direction.

“Liberal Democrats have always been the party for civil liberties, we were against ID cards when Blair tried to introduce them and we are against them now.

“I’m pleased Big Brother Watch is helping drive forward a growing consensus against Covid ID cards in our politics. Now I hope we can start to turn the tide on the creeping authoritarianism we are seeing from Number 10 across a broad range of issues.”

Sir Graham Brady MP said:

“Covid-Status Certification would be divisive and discriminatory. With high levels of vaccination protecting the vulnerable and making transmission less likely, we should aim to return to normal life, not to put permanent restrictions in place.”

Silkie Carlo, director of Big Brother Watch said:

“Our common goal is to emerge from lockdown – healthy, safe and free. But we won’t arrive at freedom through exclusion. Covid passes would be the first attempt at segregation in Britain for many decades, dividing communities without reducing the risks. We are in real danger of becoming a check-point society where anyone from bouncers to bosses could demand to see our papers. We cannot let this Government create a two-tier nation of division, discrimination and injustice.”

Minnie Rahman, Campaigns Director at JCWI, said:

“The Hostile Environment, which is built on identity checks, has already been proven to cause discrimination against migrants and people of colour. On top of this, migrant communities face significant barriers to accessing the vaccine. Any recovery plan which risks increasing racial discrimination and purposefully leaves people behind is doomed to fail.”

Sam Grant, Head of Policy and Campaigns at Liberty, said:

“We all want to get out of the pandemic as quickly as possible, but we need to do so in a way that ensures we don’t enter a ‘new normal’ which diminishes the rights and liberties we took for granted before the COVID crisis.

“Any passport system has the potential to create a two-tier society, and risk further marginalising people who are already discriminated against and cut off from vital services. Vaccine passports would allow ID systems by stealth, entrenching inequality and division.

“We need strategies that support and enable people to follow public health guidance, including vaccination, alongside more support for the most marginalised who have suffered the sharp edge of the Government’s focus on criminal justice over public health. We won’t get out of this pandemic by entrenching inequality, but only by protecting everyone.”

ENDS

Full list of signatories:

Labour Party

Diane Abbott MP
Bell Ribeiro-Addy MP
Tahir Ali MP
Rebecca Long Bailey MP
Clive Lewis MP
Beth Winter MP
Rachel Hopkins MP
Apsana Begum MP
Richard Burgon MP
Ian Byrne MP
Dawn Butler MP
Jeremy Corbyn MP
Mary Kelly Foy MP
Ian Lavery MP
Ian Mearns MP
John McDonnell MP
Grahame Morris MP
Kate Osborne MP
Zarah Sultana MP
Claudia Webbe MP
Mick Whitley MP
Nadia Whittome MP
Baroness Chakrabarti
Baroness Bryan of Partick
Lord Woodley
Lord Sikka
Lord Hendy

Liberal Democrats

Ed Davey MP
Layla Moran MP
Munira Wilson MP
Alistair Carmichael MP
Daisy Cooper MP
Wendy Chamberlain MP
Sarah Olney MP
Christine Jardine MP
Jamie Stone MP
Tim Farron MP
Lord Scriven
Lord Strasburger
Lord Tyler
Lord Clement-Jones

Conservative Party

Mark Harper MP
Steve Baker MP
Sir Iain Duncan Smith MP
Harriett Baldwin MP
Esther McVey MP
Adam Afriyie MP
Bob Blackman MP
Sir Graham Brady MP
Nus Ghani MP
Andrew Mitchell MP
Peter Bone MP
Ben Bradley MP
Andrew Bridgen MP
Paul Bristow MP
Philip Davies MP
Richard Drax MP
Jonathan Djanogly MP
Chris Green MP
Philip Hollobone MP
Adam Holloway MP
David Jones MP
Simon Jupp MP
Andrew Lewer MBE MP
Julian Lewis MP
Karl McCartney MP
Craig Mackinlay MP
Anthony Mangnall MP
Stephen McPartland MP
Anne Marie Morris MP
Sir John Redwood MP
Andrew Rosindell MP
Greg Smith MP
Henry Smith MP
Julian Sturdy MP
Sir Desmond Swayne MP
Sir Robert Syms MP
Craig Tracey MP
Jamie Wallis MP
David Warburton MP
William Wragg MP
Sir Charles Walker MP

NGOs/Businesses

Big Brother Watch
Liberty
Migrants Organise
Joint Council for the Welfare of Immigrants
medConfidential
Privacy International


Lord C-J Questions Touring Negotiation Failure

I questioned the Government on its total failure to negotiate a deal with the EU. It is clear that the Home Office refused to grant EU citizens 90-day Permitted Paid Engagement. Touring musicians and creative artists have simply been sacrificed on the altar of Tory immigration policy.

My Lords, touring musicians and creative artists are deeply angry at this negotiating failure. Is not the root of the problem refusal by the Home Office to extend permitted paid engagement here to 90 days for EU artists, meaning as a result that work permits will now be required in many member states for our artists? Will the Government urgently rethink this and renegotiate on the instrument and equipment carnet and on trucking issues?


Online Harms: The Need for Early Legislation

This is what I said when (finally) the Government made its response to the White Paper Consultation in January


My Lords, over three years have elapsed and three Secretaries of State have come and gone since the Green Paper, in the face of a rising tide of online harms, not least during the Covid period, as Ofcom has charted. On these Benches, therefore, we welcome the set of concrete proposals we finally have to tackle online harms through a duty of care. We welcome the proposal for pre-legislative scrutiny, but I hope that there is a clear and early timetable for this to take place.

As regards the ambit of the duty of care, children are of course the first priority in prevention of harm, but it is clear that social media companies have failed to tackle the spread of fake news and misinformation on their platforms. I hope that the eventual definition in the secondary legislation includes a wide range of harmful content such as deep fakes, Holocaust denial and anti-Semitism, and misinformation such as anti-vax and QAnon conspiracy theories.

I am heartened too by the Government’s plans to consider criminalising the encouragement of self-harm. I welcome the commitment to keeping a balance with freedom of expression, but surely the below-the-line exemption proposed should depend on the news publisher being Leveson-compliant in how it is regulated. I think I welcome the way that the major impact of the duty of care will fall on big-tech platforms with the greatest reach, but we on these Benches will want to kick the tyres hard on the definition, threshold and duties of category 2 to make sure that this does not become a licence to propagate serious misinformation by some smaller platforms and networks.

I welcome the confirmation that Ofcom will be the regulator, but the key to success in preventing online harms will be whether Ofcom has teeth. Platforms will need to demonstrate how they have reduced the “reasonably foreseeable” risk of harm occurring from the design of their services. In mitigating the risk of “legal but harmful content”, this comes down to the way in which platforms facilitate and even encourage the sharing of extreme or sensationalist content designed to cause harm. As many excellent bodies such as Reset, Avaaz and Carnegie UK have pointed out—as the noble Lord, Lord Stevenson, said, the latter is the begetter of the duty of care proposal—this means having the power of compulsory audit. Inspection of the algorithms that drive traffic on social media is crucial.

Will Ofcom be able to make a direction to amend a recommender algorithm, how a “like” function operates and how content is promoted? Will it be able to inspect the data by which the algorithm trains and operates? Will Ofcom be able to insist that platforms can establish the identity of a user and address the issue of fake accounts, or that paid content is labelled? Will it be able to require platforms to issue fact-checked corrections to scientifically inaccurate posts? Will Ofcom work hand in hand with the Internet Watch Foundation? International co-ordination will be vital.

Ofcom will also need to work closely with the CMA if the Government are to protect vulnerable victims of online scams, fraud, and fake and misleading online reviews, if they are explicitly excluded from this legislation. Ofcom will need to work with the ASA to regulate harmful online advertising, as well. It will also need to work with the Gambling Commission on the harms of online black-market gambling, as was highlighted yesterday by my noble friend Lord Foster.

How will this new duty of care mesh with compliance with the age-appropriate design code, regulated by the ICO? As the noble Lord, Lord Stevenson, has mentioned, the one major fudge in the response is on age verification. The proposals do not meet the objectives of the original Part 3 of the Digital Economy Act. We were promised action when the response arrived, but we have a much watered-down proposal. Pornography is increasingly available and accessible to young people on more sites than just those with user-generated content. How do the Government propose to tackle this ever more pressing problem? There are many other areas that we will want to examine in the pre-legislative process and when the Bill comes to this House.

As my honourable friend Jamie Stone pointed out in the Commons yesterday, a crucial component of minimising risk online is education. Schools need to educate children about how to use social media responsibly. What commitment do the Government have to online media education? When will the strategy appear and what resources will be devoted to it?

These are some of the yet unanswered questions before the draft legislation arrives, but I hope that the Government commit to a full debate early in the new year so that some of these issues can be unpacked at the same time as the pre-legislative scrutiny process starts.

 


Lord C-J : Give Musicians the Freedom to Tour

At a recent debate, colleagues and I heavily criticised the Government’s failure to secure a cultural exemption from cabotage rules in the EU trade negotiation.

My Lords, I join with other noble Lords in pointing out that the issues on cabotage are part of a huge cloud now hanging over the creative sector, including the requirement for work permits or visa exemptions in many EU countries, CITES certificates for musical instruments, ATA carnets for all instruments and equipment, and proof of origin requirements for merchandise. Cabotage provisions in the EU-UK Trade and Co-operation Agreement will mean that performers’ European tours will no longer be viable, because the agreement specifies that hauliers will be able to make only two journeys within a trip to the EU. Having to return to the UK between unloading sites in the EU will have a significant negative impact on the UK’s cultural exports and associated jobs.

A successful UK transport industry dedicated to our creative industries is at risk of relocation to the EU, endangering British jobs and jeopardising the attractiveness of the UK as a culture hub, as support industries will follow the companies that relocate to the EU. What proposals do the Government have for a negotiated solution, such as they have heard about today, that will meet their needs?


COVID-19, Artificial Intelligence and Data Governance: A Conversation with Lord Tim Clement-Jones

 

BIICL June 2020

 

https://youtu.be/sABSaAkkyrI

 

This was the first in a series of webinars on 'Artificial Intelligence: Opportunities, Risks, and the Future of Regulation'.

In light of the COVID-19 outbreak, governments are developing tracing applications and using a multitude of data to mitigate the spread of the virus. But the processing, storage and use of personal data, and the public health effectiveness of these applications, require public trust and a clear and specific regulatory context.

The technical focus in the debate on the design of the applications - centralised v. decentralised, national v. global, and so on - obfuscates ethical, social, and legal scrutiny, in particular against the emerging context of public-private partnerships. Discussants focused on these issues, considering the application of AI and data governance issues against the context of a pandemic, national responses, and the need for international, cross border collaboration.

Lord Clement-Jones CBE led a conversation with leading figures in this field, including:

Professor Lilian Edwards, Newcastle Law School, the inspiration behind the draft Coronavirus (Safeguards) Bill 2020: Proposed protections for digital interventions and in relation to immunity certificates;

Carly Kind, Director of the Ada Lovelace Institute, which published the rapid evidence review paper Exit through the App Store? Should the UK Government use technology to transition from the COVID-19 global public health crisis;

Professor Peter Fussey, Research Director of Advancing human rights in the age of AI and the digital society at Essex University's Human Rights Centre;

Mark Findlay, Director of the Centre for Artificial Intelligence and Data Governance at Singapore Management University, which has recently published a position paper on Ethics, AI, Mass Data and Pandemic Challenges: Responsible Data Use and Infrastructure Application for Surveillance and Pre-emptive Tracing Post-crisis.

The event was convened by Dr Irene Pietropaoli, Research Fellow in Business & Human Rights, British Institute of International and Comparative Law.

 


Regulating artificial intelligence: Where are we now? Where are we heading?

By Annabel Ashby, Imran Syed & Tim Clement-Jones on March 3, 2021

https://www.technologyslegaledge.com/author/tclementjones/

Hard or soft law?

That the regulation of artificial intelligence is a hot topic is hardly surprising. AI is being adopted at speed, news reports about high-profile AI decision-making appear frequently, and the sheer volume of guidance and regulatory proposals for interested parties to digest can seem challenging.

Where are we now? What can we expect in terms of future regulation? And what might compliance with “ethical” AI entail?

High-level ethical AI principles were published by the OECD, EU and G20 in 2019. As explained below, great strides were made in 2020 as key bodies worked to capture these principles in proposed new regulation and operational processes. 2021 will undoubtedly maintain this momentum as these initiatives continue their journey into further guidance and some hard law.

In the meantime, with regulation playing catch-up with reality (so often the case where technological innovation is concerned), industry has sought to provide reassurance by developing voluntary codes. While this is helpful and laudable, regulators are taking the view that more consistent, risk-based regulation is preferable to voluntary best practice.

We outline the most significant initiatives below, but first it is worth understanding what regulation might look like for an organisation using AI.

Regulating AI

Of course the devil will be in the detail, but analysis of the most influential papers from around the globe reveals common themes that are the likely precursors of regulation. Conceptually, the regulation of AI is fairly straightforward and has three key components:

  • setting out the standards to be attained;
  • creating record-keeping obligations; and
  • possible certification following audit of those records.

All three components will be framed by a risk-based approach.

Standards

Quality starts with the governance process around an organisation’s decision to use AI in the first place (does it, perhaps, involve an ethics committee? If so, what does the committee consider?) before considering the quality of the AI itself and how it is deployed and operated by an organisation.

Key areas that will drive standards in AI include the quality of the training data used to teach the algorithm (flawed data can “bake in” inequality or discrimination), the degree of human oversight, and the accuracy, security and technical robustness of the IT. There is also usually an expectation that certain information be given to those affected by the decision-making, such as consumers or job applicants. This includes explainability of those decisions and an ability to challenge them – a process made more complex when decisions are made in the so-called “black box” of a neural network.

An argument against specific AI regulation is that some of these quality standards are already enshrined in hard law, most obviously in equality laws and, where relevant, data protection. However, the more recent emphasis on ethical standards means that some aspects of AI that have historically been considered soft nice-to-haves may well develop into harder must-haves for organisations using AI. For example, the Framework for Ethical AI adopted by the European Parliament last Autumn includes mandatory social responsibility and environmental sustainability obligations.

Records

To demonstrate that processes and standards have been met, record-keeping will be essential. At least some of these records will be open to third-party audit as well as being used for an organisation’s own due diligence. Organisations need a certain maturity in their AI governance and operational processes to achieve this, although for many it will be a question of identifying gaps and/or enhancing existing processes rather than starting from scratch. Audit could include information about or access to training data sets; evidence that certain decisions were made at board level; staff training logs; operational records, and so on. Records will also form the foundation of the all-important accountability aspects of AI.

That said, AI brings particular challenges to record-keeping and audit. These include an argument for going beyond one-off audits and static record-keeping into a more continuous mode of monitoring, given that the decisions of many AI solutions will change over time as they seek to improve accuracy. This is of course part of the appeal of moving to AI, but it creates a potentially greater opportunity for bias or errors to be introduced and to scale quickly.

Certification

A satisfactory audit could inform AI certification, helping to drive quality and build the customer and public confidence in AI decision-making necessary for its successful use. Again, the evolving nature of AI that “learns” complicates matters: certification will need to be measured against standards and monitoring capabilities that address these aspects of AI risk.

Risk-based approach

Recognising that AI’s uses range from the relatively insignificant to critical and/or socially sensitive decision-making, best practice and regulatory proposals invariably take a flexible approach and focus requirements on “high-risk” use of AI. This concept is key; proportionate, workable regulation must take into account the context in which the AI is to be deployed and its potential impact, rather than merely focusing on the technology itself.

Key initiatives and Proposals

Turning to some of the more significant developments in AI regulation, there are some specifics worth focusing on:

OECD

The OECD outlined its classification of AI systems in November, with a view to giving policy-makers a simple lens through which to view the deployment of any particular AI system. Its classification uses four dimensions: context (sector, stakeholders, purpose and so on); data and input; AI model (neural or linear? supervised or unsupervised?); and tasks and output (what does the AI actually do?). Read more here.
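To make the four dimensions concrete, they can be pictured as a simple record that a policy-maker might fill in for any given system. This is purely an illustrative sketch: the field names and the example are my own, not the OECD's.

```python
from dataclasses import dataclass

# Illustrative sketch of the OECD's four classification dimensions.
# Field names and the example values below are hypothetical, not official.
@dataclass
class AISystemClassification:
    context: str          # sector, stakeholders, purpose of deployment
    data_and_input: str   # nature and provenance of the data the system consumes
    model: str            # e.g. neural vs linear; supervised vs unsupervised
    tasks_and_output: str # what the system actually does

# A hypothetical hospital triage tool, classified along the four dimensions
triage_tool = AISystemClassification(
    context="healthcare triage; directly affects patients",
    data_and_input="personal medical records",
    model="neural, supervised",
    tasks_and_output="recommends a priority ordering to clinicians",
)

print(triage_tool.model)  # → neural, supervised
```

Filling in a record like this for each deployed system is one way the classification could give regulators a consistent, comparable view of risk across very different applications.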

Europe

Several significant proposals were published by key institutions in 2020.

In the Spring, the European Commission’s White Paper on AI proposed regulation of AI by a principles-based legal framework targeting high-risk AI systems. It believes that regulation can underpin an AI “Eco-system of Excellence” with resulting public buy-in thanks to an “Eco-system of Trust.” For more detail see our 2020 client alert. Industry response to this proposal was somewhat lukewarm, but the Commission seems keen to progress with regulation nevertheless.

In the Autumn the European Parliament adopted its Framework for Ethical AI, to be applicable to “AI, robotics and related technologies developed, deployed and/or used within the EU” (regardless of the location of the software, algorithm or data itself). Like the Commission’s White Paper, this proposal also targets high-risk AI (although what high-risk means in practice is not aligned between the two proposals). As well as the social and environmental aspects we touched upon earlier, notable in this proposed Ethical Framework is the emphasis on the human oversight required to achieve certification. Concurrently the European Parliament looked at IP ownership for AI-generated creations and published its proposed Regulation on liability for the operation of AI systems, which recommends, among other things, an update of the current product liability regime.

Looking through the lens of human rights, the Council of Europe considered the feasibility of a legal framework for AI and how that might best be achieved. Published in December, its report  identified gaps to be plugged in the existing legal protection (a conclusion which had also been reached by the European Parliamentary Research Services, which found that existing laws, though helpful, fell short of the standards required for its proposed AI Ethics framework). Work is now ongoing to draft binding and non-binding instruments to take this study forward.

United Kingdom   

The AI Council’s AI Roadmap sets out recommendations for the strategic direction of AI to the UK government. That January 2021 report covers a range of areas; from promoting UK talent to trust and governance. For more detail read the executive summary.

Only a month before, in December 2020, the House of Lords had published AI in the UK: No room for complacency, a report with a strong emphasis on the need for public trust in AI and the associated issue of ethical frameworks. Noting that industry is currently self-regulating, the report recommended sector regulation that would extend to practical advice as well as principles and training. This seems to be a sound conclusion given that the Council of Europe’s work included the review of over 100 ethical AI documents which, it found, started from common principles but interpreted these very differently when it came to operational practice.

The government’s response to that report has just been published. It recognises the need for public trust in AI “including embedding ethical principles against a consensus normative framework.” The report promotes a number of initiatives, including the work of the AI Council and Ada Lovelace Institute, who have together been developing a legal framework for data governance upon which they are about to report.

The influential Centre for Data Ethics and Innovation published its AI Barometer and its Review into Bias in Algorithmic Decision-Making. Both reports make interesting reading, with the barometer examining risk and regulation across a number of sectors. In the context of regulation, it is notable that the CDEI does not recommend a specialist AI regulator for the UK but seems to favour a sectoral approach if and when regulation is required.

Regulators

Regulators are interested in lawful use, of course, but are also concerned with the bigger picture. Might AI decision-making disadvantage certain consumers? Could AI inadvertently create sector vulnerability thanks to over-reliance by the major players on any particular algorithm and/or data pool (the competition authorities will be interested in this aspect too)? The UK’s Competition and Markets Authority published research into potential AI harms in January and is calling for evidence as to the most effective way to regulate AI. Visit the CMA website here.

The Financial Conduct Authority will be publishing a report into AI transparency in financial services imminently. Unsurprisingly, the UK’s data protection regulator has published guidance to help organisations audit AI in the context of data protection compliance, and the public sector benefits from detailed guidance from the Turing Institute.

Regulators themselves are now coming into focus. The December House of Lords report also recommended regulator training in AI ethics and risk assessment. As part of its February response, the government states that the Competition and Markets Authority, Information Commissioner’s Office and Ofcom have together formed a Digital Regulation Cooperation Forum (DRCF) to cooperate on issues of mutual importance, and that a wider forum of regulators and other organisations will consider training needs.

2021 and beyond

In Europe we can expect regulatory developments to continue at pace in 2021, despite concerns from Denmark and others that AI may become over-regulated. As we increasingly develop the tools for classification and risk assessment, the question is less about whether to regulate and more about which applications, contexts and sectors are candidates for early regulation.


Tackling the algorithm in the public sector

Constitution Society Blog Lord C-J March 2021

Lord Clement-Jones CBE is the House of Lords Liberal Democrat Spokesperson for Digital and former Chair of the House of Lords Select Committee on Artificial Intelligence (2017-2018).

https://consoc.org.uk/tackling-the-algorithm-in-the-public-sector/

 

 

Algorithms in the public sector have certainly been much in the news since I raised the subject in a House of Lords debate last February. The use of algorithms in government – and more specifically, algorithmic decision-making – has come under increasing scrutiny.

The debate has become more intense since the UK government’s disastrous attempt to use an algorithm to determine A-level and GCSE grades in lieu of exams, which had been cancelled due to the pandemic. This is what the FT had to say last August after the Ofqual exam debacle, in which students were subjected to what has been described as unfair and unaccountable decision-making over their A-level grades:

‘The soundtrack of school students marching through Britain’s streets shouting “f*** the algorithm” captured the sense of outrage surrounding the botched awarding of A-level exam grades this year. But the students’ anger towards a disembodied computer algorithm is misplaced. This was a human failure…’

It concluded: ‘Given the severe erosion of public trust in the government’s use of technology, it might now be advisable to subject all automated decision-making systems to critical scrutiny by independent experts…. As ever, technology in itself is neither good nor bad. But it is certainly not neutral. The more we deploy automated decision-making systems, the smarter we must become in considering how best to use them and in scrutinising their outcomes.’ 

Over the past few years, we have seen a substantial increase in the adoption of algorithmic decision-making and prediction (ADM) across central and local government. An investigation by the Guardian in late 2019 showed that some 140 local authorities out of 408 surveyed, and about a quarter of police authorities, were using computer algorithms for prediction, risk assessment and assistance in decision-making in areas such as benefit claims and the allocation of social housing – despite concerns about their reliability. According to the Guardian, nearly a year later that figure had increased to half of local councils in England, Wales and Scotland, many of them without any public consultation on their use.

Of particular concern are tools such as the Harm Assessment Risk Tool (HART) system used by Durham Police to predict re-offending, which was shown by Big Brother Watch to have serious flaws in the way the use of profiling data introduces bias, discrimination and dubious predictions.

Central government use is even more opaque but we know that HMRC, the Ministry of Justice, and the DWP are the highest spenders on digital, data and algorithmic services. 

A key example of ADM use in central government is the DWP’s much criticised Universal Credit system, which was designed to be digital by default from the beginning. The Child Poverty Action Group study ‘The Computer Says No’ shows that those accessing their online account are not being given adequate explanation as to how their entitlement is calculated.

The Joint Council for the Welfare of Immigrants (JCWI) and the campaigning organisation Foxglove joined forces last year to sue the Home Office over an allegedly discriminatory algorithmic system – the so-called ‘streaming tool’ – used to screen migration applications. This appears to be the first successful legal challenge to an algorithmic decision system in the UK, although rather than defend the system in court, the Home Office decided to scrap the algorithm.

The UN Special Rapporteur on Extreme Poverty and Human Rights, Philip Alston, looked at our Universal Credit system two years ago and said in a statement afterwards: ‘Government is increasingly automating itself with the use of data and new technology tools, including AI. Evidence shows that the human rights of the poorest and most vulnerable are especially at risk in such contexts. A major issue with the development of new technologies by the UK government is a lack of transparency.’

Overseas the use of algorithms is even more extensive and, it should be said, controversial – particularly in the US. One such system is the NYPD’s Patternizr, a tool that the NYPD has designed to identify potential future patterns of criminal activity. Others include Northpointe’s COMPAS risk assessment programme in Florida and the InterRAI care assessment algorithm in Arkansas.

It’s not that we weren’t warned, most notably in Cathy O’Neil’s Weapons of Math Destruction (2016) and Hannah Fry’s Hello World (2018), of the dangers of replication of historical bias in algorithmic decision making. 

It is clear that failure to properly regulate these systems risks embedding bias and inaccuracy. Even when not relying on ADM alone, the impact of automated decision-making systems across an entire population can be immense in terms of potential discrimination, breach of privacy, access to justice and other rights.
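The replication mechanism described above can be shown with a deliberately simplified sketch. The data and groups here are entirely hypothetical – this is not a model of any real police or government system – but it illustrates how a ‘model’ that simply learns from skewed historical decisions turns the skew into a rule:

```python
# Minimal sketch of historical-bias replication, using hypothetical data.
# Each record is (group, outcome): past decisions flagged group "A" far
# more often than group "B" for otherwise similar cases.
from collections import Counter

history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 20 + [("B", 0)] * 80)

def train(records):
    """A naive 'model': predict each group's majority historical outcome."""
    counts = {}
    for group, outcome in records:
        counts.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
print(model)  # {'A': 1, 'B': 0} -- the historical skew has become the rule
```

Real systems are far more complex, but the mechanism is the same: if the training data encodes biased past decisions, an accurate model will faithfully reproduce them.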

Some of the current issues with algorithmic decision-making were identified as far back as our House of Lords Select Committee Report ‘AI in the UK: Ready Willing and Able?’ in 2018. We said at the time: ‘We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take.’

It was clear from the evidence that our own AI Select Committee took that Article 22 of the GDPR, which deals with automated individual decision-making, including profiling, does not provide sufficient protection to those subject to ADM. It contains a ‘right to an explanation’ provision for individuals subject to fully automated decision-making. However, few highly significant decisions are fully automated; algorithmic systems are more often used as decision support, for example in detecting child abuse. The law should be expanded to cover systems where AI is only part of the final decision.

The Science and Technology Select Committee Report ‘Algorithms in Decision-Making’ of May 2018, made extensive recommendations in this respect. It urged the adoption of a legally enforceable ‘right to explanation’ that allows citizens to find out how machine-learning programmes reach decisions affecting them – and potentially challenge their results. It also called for algorithms to be added to a ministerial brief, and for departments to publicly declare where and how they use them.

Last year, the Committee on Standards in Public Life published a review that looked at the implications of AI for the seven Nolan principles of public life, and examined whether government policy is up to the task of upholding standards as AI is rolled out across our public services. 

The committee’s Chair, Lord Evans, said on publishing the report:

‘Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector…. Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.’

The report found that, despite the GDPR, the Data Ethics Framework, the OECD principles, and the Guidelines for Using Artificial Intelligence in the Public Sector, the Nolan principles of openness, accountability and objectivity are not embedded in AI governance, and that they should be. The Committee’s report presented a number of recommendations to mitigate these risks, including 

  • greater transparency by public bodies in use of algorithms, 
  • new guidance to ensure algorithmic decision-making abides by equalities law, 
  • the creation of a single coherent regulatory framework to govern this area, 
  • the formation of a body to advise existing regulators on relevant issues, 
  • and proper routes of redress for citizens who feel decisions are unfair.

In the light of the Committee on Standards in Public Life Report, it is high time that a minister was appointed with responsibility for making sure that the Nolan standards are observed for algorithm use in local authorities and the public sector, as was also recommended by the Commons Science and Technology Committee. 

We also need to consider whether – as Big Brother Watch has suggested – we should:

  • Amend the Data Protection Act to ensure that any decisions involving automated processing that engage rights protected under the Human Rights Act 1998 are ultimately human decisions with meaningful human input.
  • Introduce a requirement for mandatory bias testing of any algorithms, automated processes or AI software used by the police and criminal justice system in decision-making processes.
  • Prohibit the use of predictive policing systems that have the potential to reinforce discriminatory and unfair policing patterns.
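One common form such a mandatory bias test could take – this is an illustrative sketch with hypothetical data and group labels, not a statutory method – is a disparate-impact check comparing selection rates across groups, often judged against the conventional ‘four-fifths’ (0.8) threshold borrowed from US employment guidelines:

```python
def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, hits = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(selected)
    return {g: hits[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical audit data: group "X" is selected at half the rate of "Y".
decisions = ([("X", 1)] * 30 + [("X", 0)] * 70 +
             [("Y", 1)] * 60 + [("Y", 0)] * 40)

ratio = disparate_impact_ratio(decisions, "X", "Y")
print(round(ratio, 2))  # 0.5
print(ratio < 0.8)      # True -- fails the four-fifths rule of thumb
```

A statutory regime would of course need more than a single ratio – error rates, calibration and qualitative review all matter – but even this simple check makes a system’s group-level skew visible and auditable.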

This chimes with both the Mind the Gap report from the Institute for the Future of Work, which proposed an Accountability for Algorithms Act, and the Ada Lovelace Institute paper, Can Algorithms Ever Make the Grade? Both reports call additionally for a public register of algorithms, such as have been instituted in Amsterdam and Helsinki, and independent external scrutiny to ensure the efficacy and accuracy of algorithmic systems.

Post COVID, private and public institutions will increasingly adopt algorithmic or automated decision making. These will give rise to complaints requiring specialist skills beyond sectoral or data knowledge. The CDEI in its report, Bias in Algorithmic Decision Making, concluded that algorithmic bias means that the overlap between discrimination law, data protection law and sector regulations is becoming increasingly important and existing regulators need to adapt their enforcement to algorithmic decision-making. 

This is especially true of the existing and proposed public sector ombudsmen, who are – or will be – tasked with dealing with complaints about algorithmic decision-making. They need to be staffed by specialists who can test algorithms’ compliance with ethically aligned design and operating standards and regulation. 

There is no doubt that to avoid unethical algorithmic decision making becoming irretrievably embedded in our public services we need to see this approach taken forward, and the other crucial proposals discussed above enshrined in new legislation.

The Constitution Society is committed to the promotion of informed debate and is politically impartial. Any views expressed in this article are the personal views of the author and not those of The Constitution Society.


https://consoc.org.uk/tackling-the-algorithm-in-the-public-sector/

 


Digital Technology, Trust, and Social Impact with David Puttnam

What is the role of government policy in protecting society and democracy from threats arising from misinformation? Two leading experts, both members of the UK Parliament’s House of Lords, help us understand the report Digital Technology and the Resurrection of Trust.

 

About the House of Lords report on trust, technology, and democracy

Michael Krigsman: We're discussing the impact of technology on society and democracy with two leading members of the House of Lords. Please welcome Lord Tim Clement-Jones and Lord David Puttnam. David, please tell us about your work in the House of Lords and, very briefly, about the report that you've just released.

Lord David Puttnam: Well, the most recent 18 months of my life were spent doing a report on the impact of digital technology on democracy. In a sense, the clue is in the title because my original intention was to call it The Restoration of Trust because a lot of it was about misinformation and disinformation.

The evidence we took, for just under a year, from all over the world made it evident that the situation was much, much worse, I think, than any of us – any of the 12 of us – had understood. I ended up calling it The Resurrection of Trust and I think that, in a sense, the switch in those words tells you how profound we decided the issue was.

Then, of course, along comes January the 6th in Washington, and a lot of the things that we had alluded to and things that we regarded as kind of inevitable all, in a sense, came about. We're feeling a little bit smug at the moment, but we kind of called it right at the end of June last year.

Michael Krigsman: Our second guest today is Lord Tim Clement-Jones. This is his third time back on the CXOTalk. Tim, welcome back. It's great to see you again.

Lord Tim Clement-Jones: It's great to be back, Michael. As you know, my interest is very heavily in the area of artificial intelligence, but I have this crossover with David. David was not only on my original committee, but artificial intelligence is right at the heart of these digital platforms.

I speak on digital issues in the House of Lords. They are absolutely crucial. The whole area of online harms (to some quite high degree) is driven by the algorithms at the heart of these digital platforms. I'm sure we're going to unpack that later on today.

David and I do work very closely together in trying to make sure we get the right regulatory solutions within the UK context.

Michael Krigsman: Very briefly, Tim, just tell us (for our U.S. audience) about the House of Lords.

Lord Tim Clement-Jones: It is a revising chamber, but it's also a chamber which has the kind of expertise because it contains people who are maybe at the end of their political careers, if you like, with a small p, but have a big expertise, a great interest in a number of areas that they've worked on for years or all their lives, sometimes. We can draw on real experience and understanding of some of these issues.

We call ourselves a revising chamber but, actually, I think we should really call ourselves an expert chamber because we examine legislation, we look at future regulation much more closely than the House of Commons. I think, in many ways, actually, government does treat us as a resource. They certainly treat our reports with considerable respect.

Key issues covered by the House of Lords report

Michael Krigsman: David, tell us about the core issues that your report covered. Tim, please jump in.

Lord David Puttnam: I think Tim, in a sense, set it up quite nicely. We were looking at the potential danger to democracy—of misinformation, disinformation—and the degree to which the duty of care was being exercised by the major platforms (Facebook, Twitter, et cetera) in understanding what their role was in a new 21st Century democracy, both looking at the positive role they could play in terms of information, generating information and checking information, but also the negative in terms of the amplification of disinformation. That's an issue we looked at very carefully.

This is where Tim’s and my interests absolutely coincide because within those black boxes, within those algorithmic structures, is where the problem lies. The problem, essentially – maybe this will spark people a little, I think – is that these are flawed business models. The business model that drives Facebook, Google, and others is an advertising-related business model. That requires volume. That requires hits, and their incomes are generated on the back of hits.

One of the things we tried to unpick, Michael, which was, I think, pretty important, was that we took the view that it’s about reach, not about freedom of speech. We felt that a lot of the freedom of speech advocates misunderstood the problem here. Really, the problem was the amplification of misinformation, which in turn was an enormous boost to the revenues of those platforms. That’s the problem.

We are convinced, through the evidence, that they could alter their algorithms, that they could actually dial down and solve many, many of the problems that we perceive. But, actually, it’s not in their business interest to. They’re trapped, in a sense, between the demands of their shareholders to optimize share value and the role and responsibility they have as massive information platforms within a democracy.

Lord Tim Clement-Jones: Of course, governments have been extremely reluctant, in a sense, to come up against big tech in that sense. We've seen that in the competition area over the advertising monopoly that the big platforms have. But I think many of us are now much more sensitive to this whole aspect of data, behavioral data in particular.

I think Shoshana Zuboff did us all a huge service by really getting into detail on what she calls exhaust data, in a sense. It may seem trivial to many of us but, actually, the use to which it’s put in terms of targeting messages, targeting advertising, and, in a sense, helping drive those algorithms, I think, is absolutely crucial. We’re only just beginning to come to grips with that.

Of course, David and I are both, if you like, tech enthusiasts, but you absolutely have to make sure that we have a handle on this and that we're not giving way to unintended consequences.

Impact of social media platforms on society

Michael Krigsman: What is the deep importance of this set of issues that you spend so much time and energy preparing that report?

Lord David Puttnam: If you value, as certainly I do—and I'm sure we all do value—the sort of democracy we were born and brought up in, for me it's rather like carrying a porcelain bowl across a very slippery floor. We should be looking out for it.

I did a TED Talk in 2012 ... [indiscernible, 00:07:19] entitled The Duty of Care where I made the point that we use the concept of duty of care with many, many things: in the medical sense, in the educational sense. Actually, we haven't applied it to democracy.

Democracy, of all the things that we value, may end up looking like the most fragile. Our tolerance, if you like, of the growth of these major platforms, our encouragement of the reach because of the benefits of information, has kind of blindsided us to what was also happening at the same time.

Someone described the platforms as outrage factories. I'm not sure if anyone has come up with a better description. We've actually actively encouraged outrage instead of intelligent debate.

The whole essence of democracy is compromise. What these platforms do not do is encourage intelligent debate and reflect the atmosphere of compromise that any democracy requires in order to be successful.

Lord Tim Clement-Jones: The problem is that the culture has been, to date, against us really having a handle on that. I think it's only now, and I think that it's very interesting to see what the Biden Administration is doing, too, particularly in the competition area.

One of the real barriers, I think, is thinking of these things in only individual harm. I think we're now getting to the point where maybe if somebody is affected by hate speech or racial slurs or whatever as individuals, then I think governments are beginning to accept that that kind of individual harm is something that we need to regulate and make sure that the platforms deal with.

The area that David is raising is so important, and there is still resistance in governments when it comes to, if you like, the societal harms that are being caused by the platforms. Now, this is difficult to define, but the consequences could be severe if we don’t get it right.

I think, across the world, you only have to look at Myanmar, for instance, [indiscernible, 00:09:33]. If that wasn't societal harm in terms of use by the military of Facebook, then I don't know what is. But there are others.

David has used the analogy of January the 6th, for instance. There are analogies and there are examples across the world where democracy is at risk because of the way that these platforms operate.

We have to get to grips with that. It may be hard, but we have to get to grips with it.

Michael Krigsman: How do you get to grips with a topic that, by its nature, is relatively vague and unfocused? Unlike individual harms, when you talk about societal harm, you're talking about very diffuse and broad impacts.

Lord David Puttnam: Michael, I sit on the Labour benches in the House of Lords and, probably unsurprisingly, I’m a Louis Brandeis fan, so I think the most interesting thing taking place at the moment is people looking back to the early part of the 20th Century and the railroads, the breaking up of the railroads, and understanding why that had to happen.

It wasn’t just about the railroads. It was about the railroads’ ability to block and distort all sorts of other markets. The obvious one was the coal market, but there were others. Then, indeed, it blocked and distorted the nature of shipping.

What I think legislators have woken up to is, this isn't just about platforms. This is actually about the way we operate as a society. The influence of these platforms is colossal, but most important of all, the fact that what we have allowed to develop is a business model which acts inexorably against our society's best interest.

That is, it inflames fringe views. It inflames misinformation. Actually, it not only inflames it; it then profits from that inflammation. That can’t be right.

Lord Tim Clement-Jones: Of course, it is really quite opaque because, if you look at this, the consumer is getting a free ride, aren't they? Because of the advertising, it's being redirected back to them. But it's their data which is part of the whole business model, as David has described.

It's very difficult sometimes for regulators to say, "Ah, this kind of consumer detriment," or whatever it may be. That's why you also need to look at the societal aspects of this.

If you purely look (in conventional terms) at consumer harm, then you'd actually probably miss the issues altogether because—with things like advertising monopoly, use of data without consent, and so on, and misinformation and disinformation—it is quite difficult (without looking at the bigger societal picture) just to pin it down and say, "Ah, well, there's a consumer detriment. We must intervene on competition grounds." That's why, in a sense, we're all now beginning to rewrite the rules so that we do catch these harms.

Balancing social media platforms rights against the “duty of care”

Michael Krigsman: We have a very interesting point from Simone Jo Moore on LinkedIn who is asking, “How do you strike this balance between intelligent questioning and debate versus trolling on social media? How should lawmakers and policymakers deal with this kind of issue?”

Lord David Puttnam: We identified an interesting area, if you like, of compromise – for want of a better word. As I say, we looked hard at the impact on reach.

Now, on Facebook, if you’re a reasonably popular person, you can quite quickly have 5,000 people following what you’re saying. At that point, you get a tick.

It's clear to us that the algorithm is able to identify you as a super-spreader at that point. What we're saying is, at that moment not only have you got your tick but you then have to validate and verify what it is you're saying.

That state of outrage, if you like, is what blocks the 5,000 and then has to be explained and justified. That seemed to us an interesting area to begin to explore. Is 5,000 the right number? I don't know.

But what was evident to us is the things that Tim really understands extremely well. These algorithmic systems inside that black box can be adjusted to ensure that, at a certain moment, validation takes place. Of course, we saw it happen in your own election that, in the end, warnings were put up.

Now, you have to ask yourself, why wasn't that done much, much, much sooner? Why? Because we only reasonably recently became aware of the depth of the problem.

In a sense, the whole Russian debacle in the U.S. in the 2016 election kind of got us off on the wrong track. We were looking in the wrong place. It wasn’t what Russia had done. It was what Russia was able to take advantage of. That should have been the issue, and it took us a long time to get there.

Lord Tim Clement-Jones: That's why, in a sense, you need new ways of thinking about this. It's the virality of the message, exactly as David has talked about, the super-spreader.

I like the expression used by Avaaz in their report that came out last year looking at, if you like, the anti-vaxx messages and the disinformation over the Internet during the COVID pandemic. They talked about detoxing the algorithm. I think that's really important.

In a sense, I don’t think it’s possible to lay down absolutely hard and fast rules. That’s the benefit of the duty of care: it is a blanket legal concept, which has a code of practice and is effectively enforced by a regulator. It means that it’s up to the platform to get it right in the first place.

Then, of course – David's report talked about it – you need forms of redress. You need a kind of ombudsman, or whatever may be the case, independent of the platforms who can say, "They got it wrong. They allowed these messages to impact on you," and so on and so forth. There are mechanisms that can be adopted, but at the heart of it, as David said, is this black box algorithm that we really need to get to grips with.

Michael Krigsman: You've both used terms that are very interestingly put together, it seems to me. One, Tim, you were just talking about duty of care. David, you've raised (several times) this notion of flawed business models. How do these two, duty of care and the business model, intersect? It seems like they're kind of diametrically opposed.

Lord David Puttnam: It depends on your concept of what society might be, Michael. In the type of society I’ve spent my life arguing for, they’re not opposed at all; they’re all of a piece, because that society would have a combination of regulation but also personal responsibility on the part of the people who run businesses.

One of the things that I think Tim and I are going to be arguing for, which we might have problems in the UK, is the notion of personal responsibility. At what point do the people who sit on the board at Facebook have a personal responsibility for the degree to which they exercise duty of care over the malfunction of their algorithmic systems?

Lord Tim Clement-Jones: I don't see a conflict either, Michael. I think that you may see different regulators involved. You may see, for instance, a regulator imposing a way of working over content, user-generated content on a platform. You may see another regulator (more specialist, for instance) on competition. I think it is going to be horses for courses, but I think that's the important thing to make sure that they cooperate.

I just wanted to say that I do think that people in this context often raise the question of freedom of expression. I suspect that people will come on the chat and want to raise that issue. But again, I don’t see a conflict in this area because we’re not talking about ordinary discourse. We’re talking about extreme messages: anti-vaxxing, incitement of violence, and so on and so forth.

The one thing David and I absolutely don't want to do is to impede freedom of expression. But that's sometimes used certainly by the platforms as a way of resisting regulation, and we have to avoid that.

How to handle the cross-border issues with technology governance?

Michael Krigsman: We have another question coming now from Twitter from Arsalan Khan who raises another dimension. He's talking about if individual countries create their own policies on societal harm, how do you handle the cross-border issues? It seems like that's another really tricky one here.

Lord David Puttnam: I think what is happening, and this is quite determined, I think, on the part of the Biden Administration—the UK and, actually, Europe, the EU, is probably further advanced than anybody else on this—is to align our regulatory frameworks. I think that will happen.

Now, in a sense, these are big marketplaces. The Australian situation with Facebook has stimulated this. Once you get these major markets aligned, it’s extremely hard to see how Facebook, Google, and the rest of them could continue with their current advertising model. They would have to adjust to what those marketplaces require.

Bear in mind, what troubles me a lot, Michael, is that, if you think back, Mr. Putin and President Xi must be laughing their heads off at the mess we got ourselves into because they've got their own solution to this problem – a lovely, simple solution.

We've got our knickers in a twist in an extraordinary situation quite unintended in most states. The obligation is on the great Western democracies to align the regulatory frameworks and work together. This can't be done on a country-by-country basis.

Lord Tim Clement-Jones: Once the platforms see the writing on the wall, in a sense, Michael, I think they will want to encourage people to do that. As you know, I've been heavily involved in the AI ethics agenda. That is coming together on an international basis. This, if anything, is more immediate and the pressures are much greater. I think it's bound to come together.

It's interesting that we've already had a lot of interest in the duty of care from other countries. The UK, in a sense, is a bit of a frontrunner in this despite the fact that David and I are both rather impatient. We feel that it hasn't moved fast enough.

Nevertheless, even so, by international standards, we are a little bit ahead of the game. There is a lot of interest. I think, once we go forward and we start defining and putting in regulation, that's going to be quite a useful template for people to be able to legislate.

Lord David Puttnam: Michael, it's worth mentioning that it's interesting how things bubble up and then become accepted. When the notion of fines of up to 10% of turnover was first mooted, people said, "What?! What?!"

Now, that's regarded as kind of a standard around which people begin to gather, so there is momentum. Tim is absolutely right. There is momentum here. The momentum is pretty fierce.

Ten percent of turnover is a big fine. If you're sitting on a board, you've got to think several times before you sign up on that. That's not just the cost of doing business.

Michael Krigsman: Is the core issue then the self-interest of platforms versus the public good?

Lord David Puttnam: Yes, essentially it is. Look back at the big anti-trust decisions that were made in the first decade of the 20th Century: I think we’re at a similar moment and, incidentally, I think it is certain that these things will be resolved within the next ten years in a very similar manner.

I think it's going to be up to the platforms. Do they want to be broken up? Do they want to be fined? Or do they want to get rejoined in society?

Lord Tim Clement-Jones: Yeah, I mean I could get on and really bore everybody with the different forms of remedies available to our competition regulators. But David talked about big oil, which was broken up by what are called structural remedies.

Now, it may well be that, in the future, regulators—because of the power of the tech platforms—are going to have to think about exactly doing that, say, separating Facebook from YouTube or from Instagram, or things of that sort.

We’re now out of the era of “move fast and break things.” We now expect a level of corporate responsibility from these platforms because of the power they wield. I think we have to think quite big in terms of how we’re going to regulate.

Should governments regulate social media?

Michael Krigsman: We have another comment from Twitter, again from Arsalan Khan. He's talking about, do we need a new world order that requires technology platforms to be built in? It seems like as long as you've got this private sector set of incentives versus the public good, then you're going to be at loggerheads. In a practical way, what are the solutions, the remedies, as you were just starting to describe?

Lord Tim Clement-Jones: What are governments for? Arsalan always asks the most wonderful questions, by the way, as he did last time.

What are governments for? That is what the role of government is. It is, in a sense, a brokerage. It's got to understand what is for the benefit of, if you like, society as a whole and, on the other hand, what are the freedoms that absolutely need preserving and guaranteeing and so on.

I would say that we have some really difficult decisions to make in this area. But David and I come from the point of view of actually creating more freedom because the impact of the platforms (in many, many ways) will be to reduce our freedoms if we don't do something about it.

Lord David Puttnam: Very much so, and that’s why I would argue, Michael, that Facebook’s reaction, or response, in Australia was so incredibly clumsy, because what it did was beg a question we could really have done without, which is: are they more powerful than sovereign nations?

Now, you can’t go there, because if you get the G7 or the G20 together, you know, you’re not going to get into a situation where any prime minister is going to concede, “I’m afraid there’s nothing we can do about these guys. They’re bigger than us. We’re just going to have to live with it.” That’s not going to happen.

Lord Tim Clement-Jones: The only problem there was the subtext. The legislation was prompted by one of the biggest media organizations in the world. In a sense, I felt pretty uncomfortable taking sides there.

Lord David Puttnam: I think it was just an encouragement to create a new series of an already long-running TV series.

Lord Tim Clement-Jones: [Laughter]

Lord David Puttnam: You’re absolutely right about that. I had to put that down as an extraordinary irony of history. The truth is you don’t take on nations, though many have.

Some of your companies have and genuinely believe that they were bigger. But I would say don't go there. Frankly, if I were a shareholder in Facebook – I'm not – I'd have been very, very, very cross with whoever made that decision. It was stupid.

Michael Krigsman: Where is all of this going?

Lord Tim Clement-Jones: We're still heavily engaged in trying to get the legislation right in the UK. But David and I believe that our role is to kind of keep government honest and on track and, actually, go further than they've pledged because this question of individual harm, remedies for that, and a duty of care in relation to individual harm isn't enough. It's got to go broader into societal harm.

We've got a road to travel. We've got draft legislation coming in very, very soon this spring. We've got then legislation later on in the year, but actually getting it right is going to require a huge amount of concentration.

Also, we’re going to have to fight off objections on the basis of freedom of expression and so on and so forth. We are going to have to root our determination in principle, basically. I think there’s a great deal of support out there, particularly in terms of the protection of young people and things of that sort that we’re absolutely determined to see happen.

Political messages and digital literacy

Michael Krigsman: Is there the political will, do you think, to follow through with these kinds of changes you're describing?

Lord David Puttnam: In the interest of a vibrant democracy, when any prime minister or president of any country looks at the options, I don’t think they’re facing many alternatives. I can’t really imagine Macron, Johnson, or anybody else not looking at the options available to them.

They may find those options quite uncomfortable, and the ability of some of these platforms to embarrass politicians is considerable. But when they actually look at the options, I’m not sure they’re faced with that many alternatives other than pressing down the route that Tim just laid out for you.

Lord Tim Clement-Jones: I think the real Achilles heel, though, that David's report pointed out really clearly, and the government failed to answer satisfactorily, was the whole question of electoral regulation, basically. The use of misleading political messaging during elections, the impact of, if you like, opaque political messaging where it's not obvious where it's coming from, those sorts of things.

Because governments are in control and are benefiting from some of that messaging, there's a great reluctance to take on the platforms in those circumstances. Most platforms are pretty reluctant to take down any form of political advertising or messaging or, in a sense, to moderate political content.

That for me is the bit that I think is going to be the agenda that we'll probably be fighting on for the next ten years.

Lord David Puttnam: Michael, it's quite interesting that both of the major parties – not Tim's party, as you behave very well – actually misled us. I wouldn't say lied to us, but they misled us in the evidence they gave about their use of the digital environment during an election, which was really lamentable. We called them out, but the fact that, on both sides, they felt they needed to break the law where necessary to give themselves an edge is a very worrying indicator of what we might be up against here.

Lord Tim Clement-Jones: The trouble is, political parties love data because targeted messaging – microtargeting, as it's called – is potentially very powerful in gaining support. It's like a drug. It's very difficult to wean politicians off what they see as a new, exciting tool to gain support.

Michael Krigsman: I work with various software companies, major software companies. Personalization based on data is such a major focus of technology, of every aspect of technology, with tentacles that reach into our lives. When done well, it's intuitive and it's helpful. But you're talking about the often indistinguishable case where it's done invasively, insinuating itself into the pattern of our lives. How do you even start to grapple with that?

Lord David Puttnam: It kind of bubbled up in the Cambridge Analytica case, where the guy who ran the company was stupid enough to boast about what they were able to do. What it illustrated is that that was the tip of a very, very worrying nightmare for all of us.

No, I mean, this is where you come back to individual responsibility. The idea that the people running Facebook and Google are not appalled by that possibility and aren't doing everything they can to prevent it; I think it's what gives everyone at those companies nightmares.

I don't think they ever intended or wanted to have the power they have in these fringe areas, but they're stuck with them. The answer is, how do we work with governments to make sure they're minimized?

Lord Tim Clement-Jones: This, Michael, brings in one of David's and my favorite subjects, which is digital literacy. I'm an avid reader of people who try and buck the trend. I love Jaron Lanier's book Ten Arguments for Deleting Your Social Media Accounts Right Now. I love the book by Carissa Véliz called Privacy Is Power.

Basically, that kind of understanding of what you are doing when you sign up to a platform—when you give your data away, when you don't look at the terms and conditions, you tick the boxes, you accept all cookies, all these sorts of things—it's really important that people understand the consequences of that. I think it's only a tiny minority who have this kind of idea they might possibly live off-grid. None of us can really do that, so we have to make sure that when we live with it, we are not giving away our data in those circumstances.

I don't practice what I preach half the time. We're all in a hurry. We all want to have a look at what's on that website. We hit the accept-all-cookies button or whatever it may be, and we go through. We've got to be more careful about how we do these things.

Lord David Puttnam: Chapter 7 of our report is all about digital literacy. We went into it in great depth. Again, there has been a fairly lamentable failure by most Western democracies to address this.

There are exceptions. Estonia is a terrific exception. Finland is one of the exceptions. They're exceptions because they understand the danger.

Estonia sits right on the edge of its vast neighbor Russia, with 20% of its population being Russian. It can't afford misinformation. Misinformation, for them, is a catastrophe. Of necessity, they make sure their young people are really educated in the way they receive information and how they check facts.

We are very complacent in the West, I've got to say. I'll say this about the United States too: we're unbelievably complacent in those areas, and we're going to have to get smart. We've got to make sure that young people get extremely smart about the way they're fed information and how they react and respond to it.

Lord Tim Clement-Jones: Absolutely. Our politics, right across the West, demonstrate that there's an awful lot of misinformation, which is believed – believed as gospel, effectively.

Balancing freedom of speech on social media and cyberwarfare

Michael Krigsman: We have another question from Twitter. How do you balance social media reach versus genuine freedom of speech?

Lord David Puttnam: I thought I answered it. Obviously, I didn't. It's that you accept the fact that freedom of speech requires that people can say what they want. This goes back to the black boxes. At a certain moment, the box intervenes and says, "Whoa. Just a minute. There is no truth in what you're saying," or worse, in the case of anti-vaxxers, "There is actual harm and damage in what you're saying. We're not going to give you reach."

What you do is you limit reach until the person making those statements can validate them or affirm them or find some other way of, as it were, being allowed to amplify. It's all about amplification. It's trying to stop the amplification of distortion and lies and really quite dangerous stuff like anti-vaxx content.

We've got a perfect trial run, really, with anti-vaxxing. If we can't get this right, we can't get much right.

Lord Tim Clement-Jones: There are so many ways. When people say, "Oh, how do we do this?" – well, you've got sites like Reddit, which have different communities, each with rules that have to conform to a particular standard.

Then you've got Avaaz, with not only detoxing the algorithm but also a duty of correction. Then you've got great organizations like NewsGuard who, in a sense, have a star system to verify the accuracy of news outlets. We do have the tools; we just have to be a bit determined about how we use them.

Michael Krigsman: We have another question from Twitter that I think addresses or asks about this point, which is, how can governments set effective constraints when partisan politics benefits from misusing digital technologies and even spreading misinformation?

Lord David Puttnam: Tim laid it out for you early on why the House of Lords existed. This is where it actually gets quite interesting.

We, both Tim and I, during our careers – and we both go back, I think, 25 years – have managed to get amendments into legislation against the head. That's to say, amendments that didn't suit either the government of the day or even the lead opposition of the day. The independence of the House of Lords is wonderfully, wonderfully valuable. It is expert and it does listen.

Just a tiny example: if someone said to me or Tim, "Why were you not surprised that your report didn't get more traction?" – well, it's 77,000 words long. Yeah, it's 77,000 words long because it's a bloody complicated subject. We had the time and the luxury to do it properly.

I don't think that will necessarily prove to be a stumbling block. We have enough ... [indiscernible, 00:37:01] embarrassment. The quality of the House of Lords and the ability to generate public opinion, if you like, around good, sane, sensible solutions still do function within a democracy.

But if you go down the road that Tim was just describing, if you allow the platforms to go down the route they appear to have taken, we'll be dealing with autocracy, not democracy. Then you're going to have a whole different set of problems.

Lord Tim Clement-Jones: David is so right. The power of persuasion still survives in the House of Lords. Because the government doesn't have a majority, we can get things done if that power of persuasion is effective. We've done that quite a few times over the last 25 years, as David says.

Ministers know that. They know that if you espouse a particular cause that is clearly sensible, they're going to find themselves on a pretty sticky wicket – or whatever the appropriate baseball analogy would be, Michael – in those circumstances. We have had some notable successes in that respect.

For instance, only a few years ago, we got a new code for age-appropriate design, which means that webpages now need to take account of the age of the individuals accessing them. It's now called the Children's Code. It came into effect last year, and it's a major addition to our regulation. It was quite heavily resisted by the platforms and others when it came in, but a single colleague of David's and mine (supported by us) drove it through, greatly to her credit.

Michael Krigsman: We have two questions now, one on LinkedIn and one on Twitter, that relate to the same topic: the speed of change and government's ability to keep up. On Twitter, for example: future wars are going to be cyber, and the government is just catching up. The technology is changing so rapidly that it's very difficult for the legal system to track it. How do we manage that aspect?

Lord Tim Clement-Jones: Funnily enough, governments do think about that. Their first thought is about cybersecurity. Their first thought is about their cyber, basically, their data.

We've got a brand new National Cyber Security Centre, about a year or two old now. The truth is, particularly in view of Russian activities, we now have quite good cyber controls. I'm not sure that our risk management is fantastic but, operationally, we are pretty good at this.

For instance, things like the SolarWinds hack of last year have been looked at pretty carefully by our National Cyber Security Centre. We don't know yet what the outcome is, but it has been examined closely.

Strangely enough, the criticism I have with government is, if only they thought of our data in the way that they thought about their data, we'd all be in a much happier place, quite honestly.

Lord David Puttnam: I think that's true. Michael, I don't know whether this is absolutely true in the U.S., because it's such a vast country, but my experience of legislation is that it can move very quickly when there's an incident. I'll give you an example.

I was at the Department for Education at a moment when a baby was allowed to die through a very unfortunate, catastrophic failure by different arms of government. The entire department ground to a halt for about two months while this was looked at and whilst the department tried to explain itself, and any amount of legislation was brought forward. Governments deal in crises, and this is going to be a series of crises.

The other thing governments don't like is judicial review. I think we're looking at an area here where judicial review—either by the platforms for a government decision or by civil society because of a government decision—is utterly inevitable. I actually think, longer-term, these big issues are going to be decided in the courts.

Advice for policymakers and business people

Michael Krigsman: As we finish up, can I ask you each for advice to several different groups? First is the advice that you have for governments and for policymakers.

Lord Tim Clement-Jones: Look seriously at societal harms. A duty of care simply to protect individual citizens is not enough. It is all about looking at the wider picture because, if you don't, you're going to find it's too late and your own democracy is going to suffer.

I think you're right, Michael, in the sense that some politicians appear to have a conflict of interest on this. If you're in control, you don't think about what it's like to be in opposition. Nevertheless, that's what they have to think about.

Lord David Puttnam: I was very impressed, indeed, tuning in to some of the judiciary subcommittees at the congressional hearings on the platforms. I thought that the chairman ... [indiscernible, 00:42:35] did extremely well.

There is a lot of expertise. You've got more expertise, actually, Michael, in your country than we have in ours. Listen to the experts, understand the ramifications and, for God's sake, politicians: it's in all their interests, Republicans and Democrats alike, to get this right, because getting it wrong means inviting the possibility of a form of government that very, very few people in the United States wish to even contemplate.

Michael Krigsman: What about advice to businesspeople, to the platform owners, for example?

Lord David Puttnam: Well, we had an interesting spate, didn't we, where a lot of advertisers started to take issue with Facebook, and that kind of faded away. But I would have thought that, again, it's a question of regulatory oversight and of businesses understanding what's at stake.

How many businesses in the U.S. want to see democracy crumble? I was quite interested, immediately after the January 6th events, in the way businesses walked away – not so much from the Republican party, but from Trump.

I just think we've got to begin to hold up a mirror to ourselves and also look carefully at what the ramifications of getting it wrong are. I don't think there's a single business in the U.S. (or if there are, there are very few) that wishes to go down that road. They're going to realize that that means they've got to act, not just react.

Lord Tim Clement-Jones: I think this is a board issue. This is the really important factor.

Looking at the other side – not the platform side, because I think they are only too well aware of what they need to do – if I'm on the board of a company that uses social media, I have to understand the technology, and I have to take the time to do that.

The advertising industry – really interesting, as David said – is developing all kinds of new technology solutions, like blockchain, to track where their advertising messages are going. If they're directed in the wrong way, the advertisers find out, and there's accountability down the blockchain, which is really smart in the true sense of the word.
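[The accountability idea described here can be illustrated with a minimal sketch. This is not any real ad-tech system: it simply shows, under assumed record fields like `ad_id` and `publisher`, how a hash-chained log makes tampering with past ad-delivery records detectable, which is the core property blockchain-based tracking relies on.]

```python
import hashlib
import json

def chain_records(records):
    """Build a hash-chained log: each entry commits to the previous one,
    so altering any past ad-delivery record breaks the chain."""
    chain = []
    prev_hash = "0" * 64  # conventional genesis value
    for rec in records:
        payload = json.dumps(rec, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        chain.append({"record": rec, "prev_hash": prev_hash, "hash": entry_hash})
        prev_hash = entry_hash
    return chain

def verify_chain(chain):
    """Recompute every link; returns False if any record was altered."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        if entry["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256((prev_hash + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

# Hypothetical delivery records for illustration only.
log = chain_records([
    {"ad_id": "a1", "publisher": "site-x", "impressions": 120},
    {"ad_id": "a1", "publisher": "site-y", "impressions": 80},
])
assert verify_chain(log)
log[0]["record"]["publisher"] = "site-z"  # tamper with the delivery data
assert not verify_chain(log)
```

[A real deployment would distribute the ledger among advertisers, agencies, and publishers so no single party can rewrite it; the sketch only shows the tamper-evidence mechanism.]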

It's using technology to understand technology. I think you can't leave it to the chief information officer or the chief technology officer. As the CEO or the chair, you have to understand it yourself.

Lord David Puttnam: Tim is 100% right. I've sat on a lot of boards in my life. If you really want to grab a board's attention – I'm not saying which part of the body you're going to grab – start looking at the register and then have a conversation about how adequate the directors' insurance is. It's a very lively discussion.

Lord Tim Clement-Jones: [Laughter]

Lord David Puttnam: I think this whole issue of personal responsibility, the things that insurance companies will and won't take on in terms of protecting companies and boards, that's where a lot of this could land and very interestingly.

Importance of digital education

Michael Krigsman: Let's finish up by any thoughts on the role of education and advice that you may have for educators in helping prepare our citizens to deal with these issues.

Lord Tim Clement-Jones: Funnily enough, I've just developed (with a group of people) a framework for ethical AI for use in education. We're going to be launching that in March.

The equivalent is needed in many other areas because, of course, digital literacy – digital education – is incredibly important. Actually, for parents and teachers too: this isn't just a younger-generation issue. It needs to go all the way through. I think we need to be much more proactive about the tools that are out there for parents and others, even main board directors.

You cannot spend enough time talking about the issues. That's why, when David mentioned Cambridge Analytica, suddenly everybody gets interested. But it's a very rare example of suddenly people becoming sensitized to an issue that they previously didn't really think about.

Lord David Puttnam: It's a parallel, really, with climate change. These are our issues. If we're going to prepare our kids – I've got three grandchildren – if we're going to prepare them properly for the remainder of their lives, we have an absolute obligation to explain to them what challenges their lives will face, what forms of society they're going to have to rally around, what sort of governance they should reasonably expect, and how they'll participate in all of that.

If they're left in ignorance – be it on climate change or, frankly, on all the issues we've been discussing this evening – we are making them incredibly vulnerable to a form of challenge, and a form of life, that we have been spared; we've lived very privileged lives. I think that the lives of our grandchildren, unless we get this right for them and help them, will be very diminished.

I've used that word a lot recently. They will live diminished lives, and they'll blame us, and they'll wonder why it happened.

Michael Krigsman: Certainly, one of the key themes that I've picked up from both of you during this conversation has been this idea of responsibility, individual responsibility for the public welfare.

Lord David Puttnam: Unquestionably. It's summed up in the phrase "duty of care." We have an absolutely overwhelming duty of care for future generations, and it applies as much to the digital environment as it does to climate.

Lord Tim Clement-Jones: Absolutely. In a way, what we're now having to overturn is this whole idea that online was somehow completely different to offline, to the physical world. Well, some of us have been living in the online remote world for the whole of last year, but why should standards be different in that online world? They shouldn't be. We should expect the same standards of behavior and we should expect people to be accountable for that in the same way as they are in the offline world.

Michael Krigsman: Okay. Well, what a very interesting conversation. I would like to express my deep thanks to Lord Tim Clement-Jones and Lord David Puttnam for joining us today.

David, before we go, I just have to ask you. Behind you and around you are a bunch of photographs and awards that seem distant from your role in the House of Lords. Would you tell us a little bit more about your background very quickly?

Lord David Puttnam: Yes. I was a filmmaker for many years. That's an Emmy sitting behind me. The reason the Emmy is sitting there is that the shelf isn't deep enough to take it. But I've got my Oscar up there. I've got four or five Golden Globes and three or four BAFTAs, a David di Donatello, and a Palme d'Or from Cannes. I had a very, very happy, wonderfully happy 30 years in the movie industry, and I've had a wonderful 25 years working with Tim in the legislature, so I'm a lucky guy, really.

https://www.cxotalk.com/episode/digital-technology-trust-social-impact