Catalyzing Cooperation: Working Together Across AI Governance Initiatives
Here is the text and video of what I said at this stimulating and useful event hosted by the International Congress for the Governance of AI.
https://www.youtube.com/watch?v=z_uji0LolLA
It is now my pleasure to introduce Lord Clement-Jones, also in a video presentation. He is the former chair of the House of Lords Select Committee on AI. He is the co-chair of the All-Party Parliamentary Group on AI, and he is a founding member of the OECD Parliamentary Group on AI and a member of the Council of Europe's ad hoc Committee on AI (CAHAI).
LORD TIM CLEMENT-JONES: Hello. It is great to be with you.
Today I am going to try to answer questions such as: What kind of international AI governance is needed? Can we build on existing mechanisms? Or does some new body need to be created?
As the House of Lords strongly emphasized last December in our follow-up report, "AI in the UK: No Room for Complacency," it has never been clearer, particularly after this year of COVID-19 and our ever-greater reliance on digital technology, that we need to retain public trust in the adoption of AI, particularly in its more intrusive forms, and that this is a shared issue internationally. To do that, we need, whilst realizing the opportunities, to mitigate the risks involved in the application of AI, and this brings with it the need for clear standards of accountability.
The year 2019 was the year of the formulation of high-level ethical principles in the field of AI by the OECD, the European Union, and the G20. These are very comprehensive and provide the basis for a common set of international standards. For instance, they all include the need for explainability of decisions and an ability to challenge them, a process made more complex when decisions are made in the so-called "black box" of neural networks.
But it has become clear that voluntary ethical guidelines, however widely they are shared, are not enough to guarantee ethical AI, and there comes a point where the risks attendant on noncompliance with ethical principles are so high that policymakers need to accept that certain forms of AI development and adoption require enhanced governance and/or regulation.
The key factor in 2020 has been the work done at international level in the Council of Europe, OECD, and the European Union towards putting these principles into practice in an approach to regulation which differentiates between different levels of risk and takes this into account when regulatory measures are formulated.
Last spring the European Commission published its white paper on the proposed regulation of AI by a principle-based legal framework targeting high-risk AI systems. As the white paper says, a risk-based approach is important to help ensure that the regulatory intervention is proportionate. However, it requires clear criteria to differentiate between different AI applications, in particular in relation to the question of whether or not they are high-risk. The determination of what is a high-risk AI application should be clear, easily understandable, and applicable for all parties concerned.
In the autumn the European Parliament adopted its framework for ethical AI to be applicable to AI, robotics, and related technologies developed, deployed, and/or used within the European Union. Like the Commission's white paper, this proposal also targets high-risk AI. As well as the social and environmental aspects, notable in this proposed ethical framework is the emphasis on human oversight required to achieve certification.
Looking through the lens of human rights, including democracy and the rule of law, the CAHAI last December drew up a feasibility study for regulation of AI, which likewise advocates a risk-based approach to regulation. It considers the feasibility of a legal framework for AI and how that might best be achieved. As the study says, these risks, however, depend on the application, context, technology, and stakeholders involved. To counter any stifling of socially beneficial AI innovation and to ensure that the benefits of this technology can be reaped fully while adequately tackling its risks, the CAHAI recommends that a future Council of Europe legal framework on AI should pursue a risk-based approach targeting the specific application context, and work is now ongoing to draft binding and non-binding instruments to take the study forward.
If, however, we aspire to a risk-based regulatory and governance approach, we need to be able to calibrate the risks, which will determine what level of governance we need to go to. But, as has been well illustrated during the COVID-19 pandemic, the language of risk is fraught with misunderstanding. When it comes to AI technologies we need to assess the risks by reference to the nature of AI applications and the context of their use: the potential impact and probability of harm, the importance and sensitivity of the data used, the application within a particular sector, the affected stakeholders, the risks of non-compliance, and whether a human in the loop mitigates risk to any degree.
In this respect, the detailed and authoritative classification work carried out by another international initiative, the OECD Network of Experts on AI working group, so-called "ONE AI," on the classification of AI systems comes at a crucial and timely point. This gives policymakers a simple lens through which to view the deployment of any particular AI system. Its classification uses four dimensions: context, i.e., sector, stakeholder, purpose, etc.; data and input; AI model, i.e., neural or linear, supervised or unsupervised; and tasks and output, i.e., what does the AI do? It ties in well with the Council of Europe feasibility work.
When it comes to AI technologies we need to assess the risks by reference to the nature of the AI applications and their use, and with this kind of calibration a clear governance hierarchy can be followed depending on the level of risk assessed. Where the risk is relatively low, a flexible approach, such as a voluntary ethical code without a hard compliance mechanism, can be envisaged, along the lines of the international ethical codes mentioned earlier.
Where the risk is a step higher, enhanced corporate governance using business guidelines and standards with clear disclosure and compliance mechanisms needs to be instituted. Already at international level we have guidelines on government best practice, such as the AI procurement guidelines developed by the World Economic Forum, and these have been adopted by the UK government. Finally we may need to introduce comprehensive regulation, such as that which is being adopted for autonomous vehicles, which is enforceable by law.
Given the way the work of all of these organizations is converging, the key question of course is whether, on the basis of this kind of commonly held ethical evaluation and risk classification and assessment, there are early candidates for regulation and to what extent this can or should be internationally driven. Concern about the use of live facial recognition technologies is becoming widespread, with many U.S. cities banning their use and proposals for their regulation under discussion in the European Union and the United Kingdom.
Of concern too are technologies involving deep fakes and algorithmic decision making in sensitive areas, such as criminal justice and financial services. The debate over hard and soft law in this area is by no means concluded, but there is no doubt that pooling expertise at international level could bear fruit. A common international framework could be created, informed by the work so far of the High-Level Panel on Digital Cooperation and the UN Human Rights Council, and brokered by UNESCO, where an expert group has been working on a recommendation on the ethics of artificial intelligence, by the ITU with its AI for Good platform, or by the United Nations itself, which in 2019 established a Centre for Artificial Intelligence and Robotics in the Netherlands. This could gain public trust by establishing that adopters are accountable for high-risk AI applications and at the same time allay concerns that AI and other digital technologies are being over-regulated.
Given that our aim internationally on AI governance must be to ensure that the cardinal principle is observed that AI needs to be our servant and not our master, there is cause for optimism that experts, policymakers, and regulators now recognize that they have a duty to ensure that whatever solution they adopt they recognize ascending degrees of AI risk and that policies and solutions are classified and calibrated accordingly.
Regulators themselves are now becoming more of a focus. Our House of Lords report recommended regulator training in AI ethics and risk assessment, and I believe that this will become the norm. But even if at this juncture we cannot yet identify a single body to take the work forward, there is clearly a growing common international AI agenda, and—especially I hope with the Biden administration coming much more into the action—we can all expect further progress in 2021.
Thank you.
Lord C-J: ‘Byzantine’ to ‘inclusive’: status update on UK digital ID
From Biometricupdate.com
https://www.biometricupdate.com/202108/byzantine-to-inclusive-status-update-on-uk-digital-id
The good, the bad and the puzzling elements of the UK’s digital ID project and landscape were discussed by a group of stakeholders who found the situation frustrating at present, but believe recent developments offer hope for a ‘healthy ecosystem’ of private digital identity providers and parity between physical and digital credentials. Speakers also compared UK proposals with schemes emerging elsewhere, praising the EU digital wallet approach.
The panel was convened by techUK, a trade association focusing on the potential of digital technologies, against a backdrop of recent announcements by the UK’s Department for Digital, Culture, Media and Sport, such as the ‘Digital identity and attributes consultation’ on the framework underpinning the move to digital, along with the slow-moving legislation on the digital economy.
“We seem to be devising some Byzantine pyramid of governance,” said Lord Tim Clement-Jones, House of Lords Spokesperson for Digital for the Liberal Democrats, of the overall UK plan for digital ID and the multiple oversight and auditing bodies proposed. Simply looking at the ‘Digital identity and attributes’ documentation “will blow your mind,” he said, such is his frustration with the topic. He believes Part 3 of the Digital Economy Act 2017 should have been brought into force long ago, allowing providers such as Yoti to bring age verification solutions to the market.
Fellow panellist Julie Dawson, Director of Regulatory and Policy at Yoti, was more optimistic about the current state of affairs. She noted that the fact that 3.5 million people had used the UK’s EU Exit: ID Document Check app, which included biometric verification, was highly encouraging, as were the Home Office sandbox trials for digital age verification. However, the lack of a solid digital ID could put British people at a disadvantage, even in the UK, if they cannot verify themselves online, such as in the hiring process. Yet people performing manual identity checks are expected to verify a driving license from another country, one they have never seen before, and make a decision on it – something she finds “theatrical.”
The panel, which also featured Laura Barrowcliff, Head of Strategy at digital identity provider GBG Plc, was heavily skewed towards the private sector, including the chair, Margaret Moore, Director of Citizen & Devolved Government Services at French firm Sopra Steria, which has recently been awarded a contract within France’s digital ID system. They agreed that the UK needs and is developing a healthy ecosystem of digital identity providers, that the ‘consumer’ should be at the heart of the system, and that the private sector is an inherently necessary part of the future digital ID landscape.
The government’s role is to establish trust by setting the standards private firms must adhere to, believes Lord Clement-Jones, and it “should be opening up government services to third party digital ID”. He is opposed to the notion of a government-run digital ID system, based on the outcome of the UK’s ineffective Verify scheme.
Lord Clement-Jones considers the current flow of evidence-gathering and consultations in the UK to be a “slow waltz”, particularly in light of the recent EU proposals for a digital wallet which is “exactly what is needed” as it is “leaving it to the digital marketplace”. He believes the lack of a “solid proposal” so far by the UK government is hampering the establishment of trust.
“The real thing we have to avoid is for social media to be the arbiters of digital ID. This is why we have to move fast,” said Clement-Jones. “I do not want to be defined by Google or Facebook or Instagram in terms of who I am. Let alone TikTok.” This is why the UK needs commercial digital providers, noted the member of the House of Lords.
Yoti’s Julie Dawson believes the EU proposals could even see the bloc leapfrogging other jurisdictions with the provision for spanning the public and private sectors. The inclusion of ‘vouching’ in the UK system, where somebody without formal identity could turn to a known registered professional to vouch for them and allow them to register some form of digital ID, was found to be highly encouraging. This could make the UK system more inclusive.
Data minimization should be a key part of the UK plan, where only the necessary attribute of somebody’s ID is checked, such as whether they are over 18, compared to handing over a passport or sending a scan which contains multiple other attributes which are not necessary for the seller to see. GBG’s Laura Barrowcliff said this is a highly significant benefit of digital ID and one which, if communicated to the public, could increase support for and trust in digital ID. Any reduction in fraud associated with the use of digital ID could also help sway public opinion, though multiple panellists noted that there will always be elements of identity fraud.
Yoti’s Dawson raised a concern that the current 18-month wait until any legislation comes from the framework and consultations could become lost time for developers, and she hopes they will continue to enhance their offerings. She also called for further transparency in the discussions happening in government departments.
Lord Clement-Jones hopes for the formation of data foundations to manage publicly-held information so the public knows where data is held and how. GBG’s Laura Barrowcliff simply called for simplicity in the ongoing development of the digital ID landscape to keep consumers at the heart so that they can understand the changes and potential and buy into the scheme as their trust grows.
Digital ID: What’s the current state-of-play in the UK?
On 22 July, as part of the #DigitalID2021 event series, techUK hosted an insightful discussion exploring the current state-of-play for digital identity in the UK and how to build public trust in digital identity technologies. The panel also examined how the UK’s progress on digital ID compares with international counterparts and set out their top priorities to support the digital identity market and facilitate wider adoption.
The panel included:
- Lord Tim Clement-Jones, House of Lords Spokesperson for Digital for the Liberal Democrats
- Margaret Moore, Director of Citizen & Devolved Government Services, Sopra Steria (Chair)
- Julie Dawson, Director of Regulatory and Policy, Yoti
- Laura Barrowcliff, Head of Strategy, GBG Plc
You can watch the full webinar here or read our summary of the key insights below:
The UK’s progress on digital identity
Opening the session, the panel discussed progress around digital identity since the start of the pandemic.
Julie Dawson raised a number of developments that indicate steps in the right direction. Before the pandemic over 3.5m EU citizens proved their settled status via the EU Settlement Scheme, whilst the JMLSG and Land Registry have both since explicitly recognised digital identity, with digital right to work checks and a Home Office sandbox on age verification technologies in alcohol sales also introduced since March last year. She also lauded the creation of the Digital Regulation Cooperation Forum as a great example of joining up across government departments, such as on the topic of age assurance.
Lord Tim Clement-Jones on the other hand noted that the pace of change has remained slow. He said that the UK government needs to take concrete action and should focus on opening up government data to third party providers. He also made the point that the u-turn on the Digital Economy Act Part 3 has not as yet been rectified and so the manifesto pledge to protect children online has still to be fulfilled. Julie pointed out that legislative change in terms of the Mandatory Licensing Conditions is still needed, to enable a person to prove their age to purchase alcohol without solely requiring a physical document with a physical hologram.
Collaboration across industry around digital identities was also highlighted by Julie, drawing upon the example of the Good Health Pass Collaborative, which has emerged since the start of the pandemic. The Collaborative has brought together a variety of stakeholders and over 130 companies to work on an interoperable digital identity solution to enable international travel to operate at scale once more post-COVID.
Examining the Government alpha Trust Framework and latest consultation
Moving on to look at the government’s alpha Trust Framework for digital identity, as well as the newly published consultation on digital identity and attributes, the panel explored what these documents do well and what gaps ultimately remain.
Julie Dawson and Laura Barrowcliff both saw a lot of good in the new proposals, with Laura highlighting how the priorities in the government’s approach around governance, inclusion and interoperability broadly hit on the right points. Julie also highlighted the role for vouching in the government’s framework as a positive step and emphasised the government’s recognition of the importance of parity for digital identity verification as one of the most central developments for wider adoption of the technology.
Providing a more cautious view, Lord Tim Clement-Jones said the UK risked creating a byzantine pyramid of governance on digital identity. He pointed to the huge number of bodies envisaged to have roles in the UK system and raised concerns that the UK will end up with a certification scheme that differs from anyone else’s internationally by not using existing standards or accreditation systems.
Looking forward, Julie highlighted that providers are looking for clarity on how to operate and deliver over the next 18 months before any of these documents become legislation. She also expressed the sincere hope that the progress made in terms of offering digital Right to Work checks, alongside physical ones, will continue rather than end in September 2021.
She identified two separate ‘tracks’ for public and private sector use of digital identity and raised the need for a conversation on when and how to join these up with the consumer at the heart. When considering data sources, for example, the ability of digital identity providers to access data across the Passport Office, the DVLA and other government agencies and departments is critical to support the development of digital identity solutions.
The panel was pleased to see the creation of a new government Digital Identity Strategy Board, which they hoped would drive progress, but raised the need for further transparency about ongoing work in this space, including a list of members, terms of reference and meeting minutes from these sessions.
Public trust in digital identity
One of the core topics of conversation centred upon trust in digital identity technologies and what steps can be taken to drive wider public trust in this space.
Lord Tim Clement-Jones said that there is a key role for government on standards to ensure digital identity providers are suitable and trustworthy, as well as in providing a workable and feasible proposal that inspires public confidence.
Julie highlighted that, alongside the Post Office, Yoti welcomed the soon-to-be-published research undertaken by OIX into documents and inclusion.
Laura Barrowcliff emphasised the importance of context for public trust, putting the consumer experience at the heart of considerations. Opening up digital identity and consumer choice is one such way of improving the experience for users. Whilst much of the discussion on trust ties in with concerns around fraud, Laura highlighted how digital identity can actually help from a security and privacy perspective by embodying principles such as data minimisation and transparency. She also highlighted how data minimisation and proportionate use of digital identity data could be key for user buy-in.
Lessons from around the world
Looking to international counterparts, the panel drew attention to countries around the world which have made good progress on digital identity and key learnings from these global exemplars.
The progress on digital identity made in Singapore and Canada was mentioned by Julie Dawson, who emphasised the openness around digital identity proposals – which span the public and private sector – and the work being done to keep citizens informed and involve them in the process.
Julie also raised the example of the EU, which is accelerating its work on digital identity with an approach that also spans the public and private sector and is looking at key issues such as data sources whilst focusing on the consumer. Lord Tim Clement-Jones emphasised the importance of monitoring Europe’s progress in this area and the need for the UK government to consider how its own approach will be interoperable internationally.
Panellists discussed the role digital identities have played in Estonia where 99% of citizens hold digital ID and public trust in digital identities is the norm. However, they recognised key differences between the UK and Estonia. In the UK, digital identity solutions are developing in the context of widespread use of physical identification documents, whereas digital identities were the starting point in Estonia.
Beyond the EU, Laura said that GBG has a digital identity solution in Australia where the market for reusable identities is accelerating rapidly. She highlighted that working with private sector companies who have the necessary infrastructure and capabilities in place is critical to drive adoption.
Priorities for digital identity
Drawing the discussion to a close, each of the panellists were asked for their top priority to support public trust and the growth of the digital identity market in the UK.
Transparency was identified as Julie Dawson’s top priority, particularly around what discussions are happening within and across government departments and on the work of the Strategy Board.
Lord Tim Clement-Jones highlighted data and trustworthy data-sharing as key. He said he hopes to see the formation of data foundations and trusts of publicly held information that is properly curated to be used or shared on the basis of set standards and rules, which should spill over into the digital identity arena.
Laura Barrowcliff said simplicity is most important, keeping things simple for those working in the ecosystem as well as for consumers, with those consumers at the heart of all decision-making processes.
Britain should be leading the global conversation on tech
It's been clear during the pandemic that we're increasingly dependent on digital technology and online solutions. The Culture Secretary recently set out 10 tech priorities. Some of these were reflected in the Queen's Speech, but how do they measure up and are they the right ones?
First, we need to roll out world-class digital infrastructure nationwide and level up digital prosperity across the UK.
We were originally promised spending of £5bn by 2025, yet only a fraction of this - £1.2 billion - will have been spent by then. Digital exclusion and data poverty have become acute during the pandemic. It's estimated that some 1.8 million children have not had adequate digital access. It's not just about broadband being available; it's about affordability too, and about devices being available.
Unlocking the power of data is another priority, as well as championing free and fair digital trade.
We recently had the government’s response to the consultation on the National Data Strategy. There is some understanding of the need to maintain public trust in the sharing and use of people's data and a welcome commitment to continue the work started by the Open Data Institute in creating trustworthy mechanisms such as data institutions and trusts to do so. But recent events involving GP-held data demonstrate that we must also ensure public data is valued and used for public benefit and not simply traded away. We should establish a Sovereign Health Data Fund, as suggested by Future Care Capital.
"The pace, scale and ambition of government action does not match the upskilling challenge facing many people working in the UK"
We must keep the UK safe and secure online. We need the “secure by design” consumer protection provisions now promised. But the draft Online Safety Bill now published is not yet fit for purpose. The problem is what's excluded: in particular, commercial pornography where there is no user-generated content; societal harms caused, for instance, by fake news and disinformation, so clearly described in the report of Lord Puttnam’s Democracy and Digital Technologies Select Committee; and all educational and news platforms.
Additionally, no group actions can be brought. There's no focus on the issues surrounding anonymity/know your user, or any reference to economic harms. Most tellingly, there is no focus on enhanced PSHE or the promised media literacy strategy - both of which must go hand-in-hand with this legislation. There's also little clarity on the issue of algorithmic pushing of content.
It’s vital that we build a tech-savvy nation. This is partly about digital skills for the future and I welcome greater focus on further education in the new Skills and Post-16 Education Bill. But the pace, scale and ambition of government action does not match the upskilling challenge facing many people working in the UK, as Jo Johnson recently said.
The need for a funding system that helps people to reskill is critical. Non-STEM creative courses should be valued. Careers advice and adult education need a total revamp. Apprenticeship levy reform is overdue. The work of Local Digital Skills Partnerships is welcome, but they are massively under-resourced. Broader digital literacy is crucial too, as the AI Council pointed out in their AI Roadmap. As is greater diversity and inclusion in the tech workforce.
We must fuel a new era of start-ups and scaleups and unleash the transformational power of tech and AI.
The government needs to honour their pledge to the Lords' Science and Technology Committee to support catapults to be more effective institutions as a critical part of innovation strategy. I welcome the commitment to produce a National AI Strategy, which we should all contribute to when the consultation takes place later this year.
We should be leading the global conversation on tech, building on the recent G7 Digital Communique and plans to host the Future Tech Forum, but we need to go beyond principles in establishing international AI governance standards and solutions. G7 agreement on a global minimum corporation tax rate bodes well for OECD digital tax discussions.
At the end of the day there are numerous notable omissions. Where is the commitment to a Bill to set up the new Digital Markets Unit, or to tackling the gig economy in the many services run through digital applications? The latter should be a major priority.
Lord C-J: Government must resolve AI ethical issues in the Integrated Review
The opportunities and risks involved with the development of AI and other digital technologies and use of data loom large in the 4 key areas of the Strategic Framework of the Integrated Review.
The House Live April 2021
The Lords recently debated the government’s Integrated Review set out in “Global Britain in a Competitive Age”. The opportunities and risks involved with the development of AI and other digital technologies and use of data loom large in the 4 key areas of the Strategic Framework of the Review. So, I hope that the promised AI Strategy this autumn and a Defence AI Strategy this May will flesh these out, resolve some of the contradictions and tackle a number of key issues. Let me mark the government’s card in the meantime.
Commercialisation of our R&D in the UK is key but can be a real weakness. The government need to honour their pledge to the Science and Technology Committee to support Catapults to be more effective institutions as a critical part of innovation strategy. Access to finance is also crucial. The Kalifa Review of UK Fintech recommends the delivery of a digital finance package that creates a new regulatory framework for emerging technology. What is the government’s response to these creative ideas? The pandemic has highlighted the need for public trust in data use.
"The pandemic has highlighted the need for public trust in data use"
Regarding skills, the nature of work will change radically and there will be a need for different jobs and skills. There is a great deal happening on high-end technical specialist skills: Turing Fellowships, PhDs, conversion courses, an Office for Talent, a Global Talent Visa and so on. As the AI Council Roadmap points out, the government needs to take steps to ensure that the general digital skills and digital literacy of the UK are brought up to speed. A specific training scheme should be designed to support people to work alongside AI and automation, and to be able to maximise its potential.
Building national resilience by adopting a whole-of-society approach to risk assessment is welcome, but in this context the government should heed the recent Alan Turing Institute report which emphasizes that access to reliable information, particularly online, is crucial to the ability of a democracy to coordinate effective collective action. New AI applications such as GPT-3, the language generation system, can readily spread and amplify disinformation. How will the Online Safety legislation tackle this?
At the heart of building resilience must lie a comprehensive cyber strategy, but the threat in the digital world is far wider than cyber. Hazards and threats can become more likely because of the development of technologies like AI, the transformations it will bring, and the way technologies interconnect to amplify them.
A core of our resilience is of course defence capability. A new Defence Centre for Artificial Intelligence is now being formed to accelerate adoption, and a Defence AI Strategy is promised next month. Its importance is reinforced in the Defence Command Paper, but there is a wholly inadequate approach to the control of lethal autonomous weapon systems, or LAWS. Whilst there is a NATO definition of “automated” and “autonomous”, the MOD has no operative definition of LAWS. Given that the most problematic aspect – autonomy – has been defined, this is an extraordinary state of affairs, especially as the UK is a founding member of the AI Partnership for Defence, created to “provide values-based global leadership in defence for policies and approaches in adopting AI.”
The Review talks of supporting the effective and ethical adoption of AI and data technologies and identifying international opportunities to collaborate on AI R&D ethics and regulation. At the same time, it talks of the limits of global governance with “competition over the development of rules, norms and standards.” How do the two statements square? We have seen the recent publication of the EU’s proposed Legal Framework for the risk-based regulation of AI. Will the government follow suit?
Regarding data, the government says it wants to see a continuing focus on interoperability and to champion the international flow of data and is setting up a new Central Digital and Data Office. But the pandemic has highlighted the need for public trust in data use. Will the National Data Strategy (NDS) recognize this and take on board the AI Council’s recommendations to build public trust for use of public data, through competition in data access, and responsible and trustworthy data governance frameworks?
Lord Clement-Jones is a Liberal Democrat member of the House of Lords, former Chair of the Lords Select Committee on AI and co-chair of the APPG on AI.
We Need a Legal and Ethical Framework for Lethal Autonomous Weapons

As part of a recent Defence Review, our Prime Minister has said that the UK will invest another £1.5 billion in military research and development designed to master the new technologies of warfare and establish a new Defence Centre for AI. The head of the British Army recently said that he foresees the army of the future as an integration of “boots and bots”.
The Government however have not yet explained how legal and ethical frameworks and support for personnel engaged in operations will also change as a consequence of the use of new technologies, particularly autonomous weapons, which could be deployed by our armed forces or our allies.
The final report of the US National Security Commission on Artificial Intelligence, published this March, however, considered the use of autonomous weapons systems and the risks associated with AI-enabled warfare and concluded that “The U.S. commitment to IHL” - international humanitarian law - “is long-standing, and AI-enabled and autonomous weapon systems will not change this commitment.”
The UN Secretary General, António Guterres goes further and argues: “Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law”. Yet we still have no international limitation agreement.
In company with a former Secretary of State for Defence and a former Chief of Defence Staff I recently argued in Parliament for a review of how legal and ethical frameworks need to be updated in response to novel defence technologies. This is my speech in which I pointed out the slow progress being made by the UK Government in addressing these issues.
In a written response subsequent to the debate, the Minister stated that whilst there is a NATO definition of “automated system” and “autonomous system”, the UK Ministry of Defence has no operative definition of Lethal Autonomous Weapon Systems or "LAWS". Given that the most problematic aspect – autonomy – has been defined, that is an extraordinary state of affairs.
A few years ago, I chaired the House of Lords Select Committee on AI, which considered the economic, ethical and social implications of advances in artificial intelligence. In our report published in April 2018, entitled ‘AI in the UK: Ready, willing and able’, we addressed the issue of military use of AI and stated that 'perhaps the most emotive and high stakes area of AI development today is its use for military purposes', recommending that this area merited a ‘full inquiry on its own’ (para 334).
As the Noble Lord Browne of Ladyton has made plain, regrettably, it seems not to have yet attracted such an inquiry or even any serious examination. I am therefore extremely grateful to the Noble Lord for creating the opportunity to follow up on some of the issues we raised in connection with the deployment of AI and some of the challenges we outlined.
It’s also a privilege to be a co-signatory with the Noble and Gallant Lord Houghton, who has thought so carefully about issues involving the human interface with military technology.
The broad context, of course, as the Noble Lord Browne has said, is the unknowns and uncertainties in policy, legal and regulatory terms that new technology in military use can generate.
His concerns about complications and the personal liabilities to which it exposes deployed forces are widely shared by those who understand the capabilities of new technology. All the more so in a multinational context where other countries may be using technology which either we would not deploy or the use of which could create potential vulnerabilities for our troops.
Looking back to our Report, one of the things that concerned the Committee more than anything else was the grey area surrounding the definition of lethal autonomous weapon systems or LAWS.
As the Noble Lord Browne has said, as the Committee explored the issue, we discovered that the UK’s then definition, which included the phrase “An autonomous system is capable of understanding higher-level intent and direction", was clearly out of step with the definitions used by most other governments and imposed a much higher threshold on what might be considered autonomous.
This allowed the government to say: “the UK does not possess fully autonomous weapon systems and has no intention of developing them. Such systems are not yet in existence and are not likely to be for many years, if at all.”
Our committee concluded that, ”In practice, this lack of semantic clarity could lead the UK towards an ill-considered drift into increasingly autonomous weaponry”.
This was particularly in the light of the fact that at the UN’s Convention on Certain Conventional Weapons Group of Governmental Experts (GGE) in 2017 the UK had opposed the proposed international ban on the development and use of autonomous weapons.
We therefore recommended that the UK’s definition of autonomous weapons should be realigned to be the same, or similar, as that used by the rest of the world.
The Government, in their response to the Committee’s Report in June 2018, however replied that the Ministry of Defence “has no plans to change the definition of an autonomous system”.
It did say however: “The UK will continue to actively participate in future GGE meetings, trying to reach agreement at the earliest possible stage.”
Later, thanks to the Liaison Committee we were able - on two occasions last year - to follow up on progress in this area.
On the first occasion, in reply to the Liaison Committee’s letter of last January which asked “What discussions have the Government had with international partners about the definition of an autonomous weapons system, and what representations have they received about the issues presented with their current definition?”
the government replied:
“There is no international agreement on the definition or characteristics of autonomous weapons systems. HMG has received some representations on this subject from Parliamentarians …… and has discussed it during meetings of the UN Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), an international forum which brings together expertise from states, industry, academia and civil society.
“The GGE is yet to achieve consensus on an internationally accepted definition and there is therefore no common standard against which to align. As such, the UK does not intend to change its definition.”
So no change there, my Lords, until later in the year: December 2020, when the Prime Minister announced the creation of the Autonomy Development Centre to “accelerate the research, development, testing, integration and deployment of world-leading artificial intelligence and autonomous systems”.
In the follow up Report “AI in the UK: No Room for Complacency” published in the same month, we concluded: “We believe that the work of the Autonomy Development Centre will be inhibited by the failure to align the UK’s definition of autonomous weapons with international partners: doing so must be a first priority for the Centre once established.”
The response to this last month was a complete about turn by the Government. They said:
“We agree that the UK must be able to participate in international debates on autonomous weapons, taking an active role as moral and ethical leader on the global stage, and we further agree the importance of ensuring that official definitions do not undermine our arguments or diverge from our allies.
“In recent years the MOD has subscribed to a number of definitions of autonomous systems, principally to distinguish them from unmanned or automated systems, and not specifically as the foundation for an ethical framework. On this aspect, we are aligned with our key allies.
Most recently, the UK accepted NATO’s latest definitions of “autonomous” and “autonomy”, which are now in working use within the Alliance. The Committee should note that these definitions refer to broad categories of autonomous systems, and not specifically to LAWS. To assist the Committee, we have provided a table setting out UK and some international definitions of key terms.”
The NATO definition sets a much less high bar as to what is considered autonomous: “A system that decides and acts to accomplish desired goals within defined parameters, based on acquired knowledge and an evolving situational awareness, following an optimal but potentially unpredictable course of action.”
The Government went on to say: “The MOD is preparing to publish a new Defence AI Strategy and will continue to review definitions as part of ongoing policy development in this area.”
Now, I apologize for taking Noble Lords at length through this exchange of recommendation and response, but if nothing else it does demonstrate the terrier-like quality of Lords Select Committees in getting responses from government.
This latest response is extremely welcome. But in the context of Lord Browne’s amendment and the issues we have raised, we need to ask a number of questions now: What are the consequences of the MOD’s fresh thinking?
What is the Defence AI Strategy designed to achieve? Does it include the kind of enquiry our Select Committee was asking for?
Now that we subscribe to the common NATO definition of LAWS, will the Strategy in fact deal specifically with the liability and international and domestic legal and ethical framework issues which are central to this amendment?
If not, my Lords, then a review of the type envisaged by this amendment is essential.
The final report of the US National Security Commission on Artificial Intelligence referred to by the Noble Lord Browne has for example taken a comprehensive approach to the issues involved. The Noble Lord has quoted three very important conclusions and asked whether the government agrees in respect of our own autonomous weapons.
There are three further crucial recommendations made by the Commission:
“The United States must work closely with its allies to develop standards of practice regarding how states should responsibly develop, test, and employ AI-enabled and autonomous weapon systems.”
And “The United States should actively pursue the development of technologies and strategies that could enable effective and secure verification of future arms control agreements involving uses of AI technologies.”
And of particular importance in this context: “countries must take actions which focus on reducing risks associated with AI enabled and autonomous weapon systems and encourage safety and compliance with IHL (international humanitarian law) when discussing their development, deployment, and use”.
Will the Defence AI Strategy or indeed the Integrated Review undertake as wide an enquiry? Would it come to the same or similar conclusions?
My Lords, the MOD, it seems, has moved some way towards getting to grips with the implications of autonomous weapons in the last three years. If it has not yet considered the issues set out in the amendment, it clearly should: the legal frameworks for warfare need to be updated as soon as possible in the light of new technology, or our service personnel will be at considerable legal risk. I hope it will move further in response to today’s short debate.
COVID-19, Artificial Intelligence and Data Governance: A Conversation with Lord Tim Clement-Jones
BIICL June 2020
https://youtu.be/sABSaAkkyrI
This was the first in a series of webinars on 'Artificial Intelligence: Opportunities, Risks, and the Future of Regulation'.
In light of the COVID-19 outbreak, governments are developing tracing applications and using a multitude of data to mitigate the spread of the virus. But the processing, storing, use of personal data and the public health effectiveness of these applications require public trust and a clear and specific regulatory context.
The technical focus in the debate on the design of the applications - centralised v. decentralised, national v. global, and so on - obfuscates ethical, social, and legal scrutiny, in particular against the emerging context of public-private partnerships. Discussants focused on these issues, considering the application of AI and data governance issues against the context of a pandemic, national responses, and the need for international, cross border collaboration.
Lord Clement-Jones CBE led a conversation with leading figures in this field, including:
Professor Lilian Edwards, Newcastle Law School, the inspiration behind the draft Coronavirus (Safeguards) Bill 2020: Proposed protections for digital interventions and in relation to immunity certificates;
Carly Kind, Director of The Ada Lovelace Institute, which published the rapid evidence review paper Exit through the App Store? Should the UK Government use technology to transition from the COVID-19 global public health crisis;
Professor Peter Fussey, Research Director of Advancing human rights in the age of AI and the digital society at Essex University's Human Rights Centre;
Mark Findlay, Director of the Centre for Artificial Intelligence and Data Governance at Singapore Management University, which has recently published a position paper on Ethics, AI, Mass Data and Pandemic Challenges: Responsible Data Use and Infrastructure Application for Surveillance and Pre-emptive Tracing Post-crisis.
The event was convened by Dr Irene Pietropaoli, Research Fellow in Business & Human Rights, British Institute of International and Comparative Law.
Regulating artificial intelligence: Where are we now? Where are we heading?
By Annabel Ashby, Imran Syed & Tim Clement-Jones on March 3, 2021
https://www.technologyslegaledge.com/author/tclementjones/
Hard or soft law?

That the regulation of artificial intelligence is a hot topic is hardly surprising. AI is being adopted at speed, news reports frequently appear about high-profile AI decision-making, and the sheer volume of guidance and regulatory proposals for interested parties to digest can seem challenging.
Where are we now? What can we expect in terms of future regulation? And what might compliance with “ethical” AI entail?
High-level ethical AI principles were formulated by the OECD, EU and G20 in 2019. As explained below, great strides were made in 2020 as key bodies worked to capture these principles in proposed new regulation and operational processes. 2021 will undoubtedly keep up this momentum as these initiatives continue their journey into further guidance and some hard law.
In the meantime, with regulation playing catch-up with reality (so often the case where technological innovation is concerned), industry has sought to provide reassurance by developing voluntary codes. While this is helpful and laudable, regulators are taking the view that more consistent, risk-based regulation is preferable to voluntary best practice.
We outline the most significant initiatives below, but first it is worth understanding what regulation might look like for an organisation using AI.
Regulating AI
Of course the devil will be in the detail, but analysis of the most influential papers from around the globe reveals common themes that are the likely precursors of regulation. Conceptually, the regulation of AI is fairly straightforward and has three key components:
- setting out the standards to be attained;
- creating record keeping obligations; and
- possible certification following audit of those records, which will all be framed by a risk-based approach.
Standards
Quality starts with the governance process around an organisation’s decision to use AI in the first place (does it, perhaps, involve an ethics committee? If so, what does the committee consider?) before considering the quality of the AI itself and how it is deployed and operated by an organisation.
Key areas that will drive standards in AI include the quality of the training data used to teach the algorithm (flawed data can “bake in” inequality or discrimination), the degree of human oversight, and the accuracy, security and technical robustness of the IT. There is also usually an expectation that certain information be given to those affected by the decision-making, such as consumers or job applicants. This includes explainability of those decisions and an ability to challenge them – a process made more complex when decisions are made in the so-called “black box” of a neural network. An argument against specific AI regulation is that some of these quality standards are already enshrined in hard law, most obviously in equality laws and, where relevant, data protection. However, the more recent emphasis on ethical standards means that some aspects of AI that have historically been considered soft nice-to-haves may well develop into harder must-haves for organisations using AI. For example, the Framework for Ethical AI adopted by the European Parliament last Autumn includes mandatory social responsibility and environmental sustainability obligations.
Records
To demonstrate that processes and standards have been met, record-keeping will be essential. At least some of these records will be open to third-party audit as well as being used for an organisation's own due diligence. Organisations need a certain maturity in their AI governance and operational processes to achieve this, although for many it will be a question of identifying gaps and/or enhancing existing processes rather than starting from scratch. Audit could include information about or access to training data sets; evidence that certain decisions were made at board level; staff training logs; operational records, and so on. Records will also form the foundation of the all-important accountability aspects of AI.
That said, AI brings particular challenges to record-keeping and audit. These include an argument for going beyond singular audits and static record-keeping and into a more continuous mode of monitoring, given that the decisions of many AI solutions will change over time as they seek to improve accuracy. This is of course part of the appeal of moving to AI, but it creates potentially greater opportunity for bias or errors to be introduced and to scale quickly.
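For technical teams, a minimal sketch may help make "record-keeping plus continuous monitoring" concrete. The Python below is illustrative only and is not drawn from any of the proposals discussed in this post: the field names, the "approve" outcome, the baseline rate and the drift tolerance are all assumptions made for the example.

```python
# Illustrative sketch: per-decision audit records plus a simple ongoing drift check.
# All names and thresholds are assumptions for the example, not a prescribed standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class DecisionRecord:
    """One auditable entry per automated decision."""
    model_version: str        # which model/version produced the decision
    training_data_ref: str    # pointer to the training data set (for audit access)
    input_summary: dict       # features considered, to support explanation and challenge
    output: str               # the decision made
    human_reviewed: bool      # whether a human was in the loop
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class AuditLog:
    """Append-only store of decision records, exportable for third-party audit."""
    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def record(self, rec: DecisionRecord) -> None:
        self._records.append(rec)

    def approval_rate(self, last_n: Optional[int] = None) -> float:
        recs = self._records[-last_n:] if last_n else self._records
        return sum(r.output == "approve" for r in recs) / max(len(recs), 1)

    def drift_alert(self, baseline_rate: float, last_n: int = 100, tolerance: float = 0.10) -> bool:
        """Continuous monitoring: flag if recent behaviour diverges from an audited baseline."""
        return abs(self.approval_rate(last_n) - baseline_rate) > tolerance

    def export(self) -> str:
        """Serialise the log, e.g. to hand to an external auditor."""
        return json.dumps([asdict(r) for r in self._records], indent=2)

# Example usage
log = AuditLog()
log.record(DecisionRecord(
    model_version="credit-risk-2.3",                  # hypothetical model name
    training_data_ref="datasets/credit/2020-q4",      # hypothetical reference
    input_summary={"income_band": "B", "age_range": "30-39"},
    output="approve",
    human_reviewed=True,
))
# With only one logged decision the recent approval rate is 1.0, so this
# illustrative check fires against a 0.65 baseline and a 0.10 tolerance.
if log.drift_alert(baseline_rate=0.65, last_n=50):
    print("Decision behaviour has drifted from the audited baseline; trigger a fresh review.")
```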
Certification
A satisfactory audit could inform AI certification, helping to drive quality and build up customer and public confidence in AI decision-making necessary for successful use of AI. Again, although the evolving nature of AI which “learns” complicates matters, certification will need to be measured against standards and monitoring capabilities that speak to these aspects of AI risk.
Risk-based approach
Recognising that AI’s uses range from the relatively insignificant to critical and/or socially sensitive decision-making, best practice and regulatory proposals invariably take a flexible approach and focus requirements on “high-risk” use of AI. This concept is key; proportionate, workable, regulation must take into account the context in which the AI is to be deployed and its potential impact rather than merely focusing on the technology itself.
Key initiatives and Proposals
Turning to some of the more significant developments in AI regulation, there are some specifics worth focusing on:
OECD
The OECD outlined its classification of AI systems in November with a view to giving policy-makers a simple lens through which to view the deployment of any particular AI system. Its classification uses four dimensions: context (i.e. sector, stakeholder, purpose etc); data and input; AI model (i.e. neural or linear? Supervised or unsupervised?); and tasks and output (i.e. what does the AI do?). Read more here.
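By way of illustration only, the short sketch below shows how an organisation might capture those four dimensions internally and apply a toy triage rule to flag systems likely to count as "high-risk". The enum values, field names and the triage rule itself are our own assumptions, not part of the OECD classification.

```python
# Illustrative sketch: recording the four classification dimensions and applying
# a toy risk triage. The rule below is an assumption for the example, not OECD guidance.
from dataclasses import dataclass
from enum import Enum

class Sector(Enum):
    HEALTHCARE = "healthcare"
    FINANCE = "finance"
    RETAIL = "retail"

@dataclass
class AISystemProfile:
    # Dimension 1: context (sector, stakeholders, purpose)
    sector: Sector
    affects_individuals: bool
    purpose: str
    # Dimension 2: data and input
    uses_personal_data: bool
    # Dimension 3: AI model (e.g. neural or linear, supervised or unsupervised)
    model_type: str
    # Dimension 4: tasks and output (what the AI does)
    task: str
    automated_decision: bool  # acts without a human in the loop

def triage(profile: AISystemProfile) -> str:
    """Toy rule: automated decisions about individuals, using personal data or made
    in a sensitive sector, are flagged for enhanced governance."""
    sensitive = profile.sector in {Sector.HEALTHCARE, Sector.FINANCE}
    if profile.automated_decision and profile.affects_individuals and (sensitive or profile.uses_personal_data):
        return "high-risk: enhanced governance / regulation likely"
    return "lower-risk: a voluntary code may suffice"

# Example usage
credit_scoring = AISystemProfile(
    sector=Sector.FINANCE,
    affects_individuals=True,
    purpose="consumer credit scoring",
    uses_personal_data=True,
    model_type="supervised neural network",
    task="classification",
    automated_decision=True,
)
print(triage(credit_scoring))  # -> high-risk: enhanced governance / regulation likely
```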
Europe
Several significant proposals were published by key institutions in 2020.
In the Spring, the European Commission’s White Paper on AI proposed regulation of AI by a principles-based legal framework targeting high-risk AI systems. It believes that regulation can underpin an AI “Eco-system of Excellence” with resulting public buy-in thanks to an “Eco-system of Trust.” For more detail see our 2020 client alert. Industry response to this proposal was somewhat lukewarm, but the Commission seems keen to progress with regulation nevertheless.
In the Autumn the European Parliament adopted its Framework for Ethical AI, to be applicable to “AI, robotics and related technologies developed, deployed and/or used within the EU” (regardless of the location of the software, algorithm or data itself). Like the Commission’s White Paper, this proposal also targets high-risk AI (although what high-risk means in practice is not aligned between the two proposals). As well as the social and environmental aspects we touched upon earlier, notable in this proposed Ethical Framework is the emphasis on human oversight required to achieve certification. Concurrently the European Parliament looked at IP ownership for AI-generated creations and published its proposed Regulation on liability for the operation of AI systems, which recommends, among other things, an update of the current product liability regime.
Looking through the lens of human rights, the Council of Europe considered the feasibility of a legal framework for AI and how that might best be achieved. Published in December, its report identified gaps to be plugged in the existing legal protection (a conclusion which had also been reached by the European Parliamentary Research Service, which found that existing laws, though helpful, fell short of the standards required for its proposed AI Ethics framework). Work is now ongoing to draft binding and non-binding instruments to take this study forward.
United Kingdom
The AI Council’s AI Roadmap sets out recommendations to the UK government for the strategic direction of AI. That January 2021 report covers a range of areas, from promoting UK talent to trust and governance. For more detail read the executive summary.
Only a month before, in December 2020, the House of Lords had published AI in the UK: No room for complacency, a report with a strong emphasis on the need for public trust in AI and the associated issue of ethical frameworks. Noting that industry is currently self-regulating, the report recommended sector regulation that would extend to practical advice as well as principles and training. This seems to be a sound conclusion given that the Council of Europe’s work included the review of over 100 ethical AI documents which, it found, started from common principles but interpreted these very differently when it came to operational practice.
The government’s response to that report has just been published. It recognises the need for public trust in AI, “including embedding ethical principles against a consensus normative framework.” The response promotes a number of initiatives, including the work of the AI Council and Ada Lovelace Institute, who have together been developing a legal framework for data governance upon which they are about to report.
The influential Centre for Data Ethics and Innovation published its AI Barometer and its Review into Bias in Algorithmic Decision-Making. Both reports make interesting reading, with the barometer report looking at risk and regulation across a number of sectors. In the context of regulation, it is notable that the CDEI does not recommend a specialist AI regulator for the UK but seems to favour a sectoral approach if and when regulation is required.
Regulators
Regulators are interested in lawful use, of course, but are also concerned with the bigger picture. Might AI decision-making disadvantage certain consumers? Could AI inadvertently create sector vulnerability thanks to overreliance by the major players on any particular algorithm and/or data pool? (The competition authorities will be interested in this aspect too.) The UK’s Competition and Markets Authority published research into potential AI harms in January and is calling for evidence as to the most effective way to regulate AI. Visit the CMA website here.
The Financial Conduct Authority will be publishing a report into AI transparency in financial services imminently. Unsurprisingly, the UK’s data protection regulator has published guidance to help organisations audit AI in the context of data protection compliance, and the public sector benefits from detailed guidance from the Turing Institute.
Regulators themselves are now becoming more of a focus. The December House of Lords report also recommended regulator training in AI ethics and risk assessment. As part of its February response, the government states that the Competition and Markets Authority, the Information Commissioner's Office and Ofcom have together formed a Digital Regulation Cooperation Forum (DRCF) to cooperate on issues of mutual importance, and that a wider forum of regulators and other organisations will consider training needs.
2021 and beyond
In Europe we can expect regulation to develop at pace in 2021, despite concerns from Denmark and others that AI may become over-regulated. As we increasingly develop the tools for classification and risk assessment, the question is therefore less about whether to regulate and more about which applications, contexts and sectors are candidates for early regulation.
Tackling the algorithm in the public sector
Constitution Society Blog, Lord C-J, March 2021
Lord Clement-Jones CBE is the House of Lords Liberal Democrat Spokesperson for Digital and former Chair of the House of Lords Select Committee on Artificial Intelligence (2017-2018).
https://consoc.org.uk/tackling-the-algorithm-in-the-public-sector/

Algorithms in the public sector have certainly been much in the news since I raised the subject in a House of Lords debate last February. The use of algorithms in government – and more specifically, algorithmic decision-making – has come under increasing scrutiny.
The debate has become more intense since the UK government's disastrous attempt to use an algorithm to determine A-level and GCSE grades in lieu of exams, which had been cancelled due to the pandemic. This is what the FT had to say last August after the Ofqual exams debacle, in which students were subjected to what has been described as unfair and unaccountable decision-making over their A-level grades:
‘The soundtrack of school students marching through Britain’s streets shouting “f*** the algorithm” captured the sense of outrage surrounding the botched awarding of A-level exam grades this year. But the students’ anger towards a disembodied computer algorithm is misplaced. This was a human failure….’
It concluded: ‘Given the severe erosion of public trust in the government’s use of technology, it might now be advisable to subject all automated decision-making systems to critical scrutiny by independent experts…. As ever, technology in itself is neither good nor bad. But it is certainly not neutral. The more we deploy automated decision-making systems, the smarter we must become in considering how best to use them and in scrutinising their outcomes.’
Over the past few years, we have seen a substantial increase in the adoption of algorithmic decision-making and prediction, or ADM, across central and local government. An investigation by the Guardian in late 2019 showed that some 140 of the 408 local authorities surveyed, and about a quarter of police authorities, were using computer algorithms for prediction, risk assessment and assistance in decision-making in areas such as benefit claims and the allocation of social housing – despite concerns about their reliability. According to the Guardian, nearly a year later that figure had increased to half of local councils in England, Wales and Scotland, many of them without any public consultation on their use.
Of particular concern are tools such as the Harm Assessment Risk Tool (HART) used by Durham Police to predict re-offending, which Big Brother Watch showed to have serious flaws in the way its use of profiling data introduces bias, discrimination and dubious predictions.
Central government use is even more opaque but we know that HMRC, the Ministry of Justice, and the DWP are the highest spenders on digital, data and algorithmic services.
A key example of ADM use in central government is the DWP's much-criticised Universal Credit system, which was designed to be digital by default from the beginning. The Child Poverty Action Group study 'The Computer Says No' shows that those accessing their online account are not being given an adequate explanation of how their entitlement is calculated.
The Joint Council for the Welfare of Immigrants (JCWI) and the campaigning organisation Foxglove joined forces last year to sue the Home Office over an allegedly discriminatory algorithmic system – the so-called 'streaming tool' – used to screen migration applications. This appears to be the first successful legal challenge to an algorithmic decision system in the UK, although rather than defend the system in court, the Home Office decided to scrap the algorithm.
The UN Special Rapporteur on Extreme Poverty and Human Rights, Philip Alston, looked at our Universal Credit system two years ago and said in a statement afterwards: ‘Government is increasingly automating itself with the use of data and new technology tools, including AI. Evidence shows that the human rights of the poorest and most vulnerable are especially at risk in such contexts. A major issue with the development of new technologies by the UK government is a lack of transparency.’
Overseas, the use of algorithms is even more extensive and, it should be said, controversial – particularly in the US. One such system is Patternizr, a tool designed by the NYPD to identify potential future patterns of criminal activity. Others include Northpointe's COMPAS risk assessment programme in Florida and the InterRAI care assessment algorithm in Arkansas.
It's not that we weren't warned, most notably by Cathy O'Neil's Weapons of Math Destruction (2016) and Hannah Fry's Hello World (2018), of the danger that algorithmic decision-making will replicate historical bias.
It is clear that failure to properly regulate these systems risks embedding bias and inaccuracy. Even when decisions do not rely on ADM alone, the impact of automated decision-making systems across an entire population can be immense in terms of potential discrimination, breach of privacy, access to justice and other rights.
Some of the current issues with algorithmic decision-making were identified as far back as our House of Lords Select Committee report 'AI in the UK: Ready, Willing and Able?' in 2018. We said at the time: 'We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual's life, unless it can generate a full and satisfactory explanation for the decisions it will take.'
It was clear from the evidence our own AI Select Committee took that Article 22 of the GDPR, which deals with automated individual decision-making, including profiling, does not provide sufficient protection to those subject to ADM. It contains a 'right to an explanation' provision when an individual has been subject to fully automated decision-making. However, few highly significant decisions are fully automated – often, algorithms are used as decision support, for example in detecting child abuse. The law should be expanded to also cover systems where AI is only part of the final decision.
The Science and Technology Select Committee Report ‘Algorithms in Decision-Making’ of May 2018, made extensive recommendations in this respect. It urged the adoption of a legally enforceable ‘right to explanation’ that allows citizens to find out how machine-learning programmes reach decisions affecting them – and potentially challenge their results. It also called for algorithms to be added to a ministerial brief, and for departments to publicly declare where and how they use them.
Last year, the Committee on Standards in Public Life published a review that looked at the implications of AI for the seven Nolan principles of public life, and examined if government policy is up to the task of upholding standards as AI is rolled out across our public services.
The committee’s Chair, Lord Evans, said on publishing the report:
‘Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector…. Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.’
The report found that, despite the GDPR, the Data Ethics Framework, the OECD principles, and the Guidelines for Using Artificial Intelligence in the Public Sector, the Nolan principles of openness, accountability and objectivity are not embedded in AI governance and should be. The Committee's report presented a number of recommendations to mitigate these risks, including:
- greater transparency by public bodies in use of algorithms,
- new guidance to ensure algorithmic decision-making abides by equalities law,
- the creation of a single coherent regulatory framework to govern this area,
- the formation of a body to advise existing regulators on relevant issues,
- and proper routes of redress for citizens who feel decisions are unfair.
In the light of the Committee on Standards in Public Life Report, it is high time that a minister was appointed with responsibility for making sure that the Nolan standards are observed for algorithm use in local authorities and the public sector, as was also recommended by the Commons Science and Technology Committee.
We also need to consider whether – as Big Brother Watch has suggested – we should:
- Amend the Data Protection Act to ensure that any decisions involving automated processing that engage rights protected under the Human Rights Act 1998 are ultimately human decisions with meaningful human input.
- Introduce a requirement for mandatory bias testing of any algorithms, automated processes or AI software used by the police and criminal justice system in decision-making processes.
- Prohibit the use of predictive policing systems that have the potential to reinforce discriminatory and unfair policing patterns.
This chimes with both the Mind the Gap report from the Institute for the Future of Work, which proposed an Accountability for Algorithms Act, and the Ada Lovelace Institute paper, Can Algorithms Ever Make the Grade? Both reports call additionally for a public register of algorithms, such as have been instituted in Amsterdam and Helsinki, and independent external scrutiny to ensure the efficacy and accuracy of algorithmic systems.
Post-COVID, private and public institutions will increasingly adopt algorithmic or automated decision-making. This will give rise to complaints requiring specialist skills beyond sectoral or data knowledge. The CDEI, in its Review into Bias in Algorithmic Decision-Making, concluded that algorithmic bias means the overlap between discrimination law, data protection law and sector regulations is becoming increasingly important, and that existing regulators need to adapt their enforcement to algorithmic decision-making.
This is especially true of both the existing and proposed public sector ombudsmen, who are – or will be – tasked with dealing with complaints about algorithmic decision-making. They need to be staffed by specialists who can test algorithms' compliance with ethically aligned design and operating standards and regulation.
There is no doubt that to avoid unethical algorithmic decision making becoming irretrievably embedded in our public services we need to see this approach taken forward, and the other crucial proposals discussed above enshrined in new legislation.
The Constitution Society is committed to the promotion of informed debate and is politically impartial. Any views expressed in this article are the personal views of the author and not those of The Constitution Society.
Digital Technology, Trust, and Social Impact with David Puttnam
What is the role of government policy in protecting society and democracy from threats arising from misinformation? Two leading experts and members of the UK Parliament's House of Lords help us understand the report Digital Technology and the Resurrection of Trust.
About the House of Lords report on trust, technology, and democracy
Michael Krigsman: We're discussing the impact of technology on society and democracy with two leading members of the House of Lords. Please welcome Lord Tim Clement-Jones and Lord David Puttnam. David, please tell us about your work in the House of Lords and, very briefly, about the report that you've just released.
Lord David Puttnam: Well, the most recent 18 months of my life were spent doing a report on the impact of digital technology on democracy. In a sense, the clue is in the title because my original intention was to call it The Restoration of Trust because a lot of it was about misinformation and disinformation.
The evidence we took, for just under a year, from all over the world made it evident the situation was much, much worse, I think, than any other committee, any of the 12 of us, had understood. I ended up calling it The Resurrection of Trust and I think that, in a sense, the switch in those words tells you how profound we decided that the issue was.
Then, of course, along comes January the 6th in Washington, and a lot of the things that we had alluded to and things that we regarded as kind of inevitable all, in a sense, came about. We're feeling a little bit smug at the moment, but we kind of called it right at the end of June last year.
Michael Krigsman: Our second guest today is Lord Tim Clement-Jones. This is his third time back on the CXOTalk. Tim, welcome back. It's great to see you again.
Lord Tim Clement-Jones: It's great to be back, Michael. As you know, my interest is very heavily in the area of artificial intelligence, but I have this crossover with David. David was not only on my original committee, but artificial intelligence is right at the heart of these digital platforms.
I speak on digital issues in the House of Lords. They are absolutely crucial. The whole area of online harms (to some quite high degree) is driven by the algorithms at the heart of these digital platforms. I'm sure we're going to unpack that later on today.
David and I do work very closely together in trying to make sure we get the right regulatory solutions within the UK context.
Michael Krigsman: Very briefly, Tim, just tell us (for our U.S. audience) about the House of Lords.
Lord Tim Clement-Jones: It is a revising chamber, but it's also a chamber which has the kind of expertise because it contains people who are maybe at the end of their political careers, if you like, with a small p, but have a big expertise, a great interest in a number of areas that they've worked on for years or all their lives, sometimes. We can draw on real experience and understanding of some of these issues.
We call ourselves a revising chamber but, actually, I think we should really call ourselves an expert chamber because we examine legislation, we look at future regulation much more closely than the House of Commons. I think, in many ways, actually, government does treat us as a resource. They certainly treat our reports with considerable respect.
Key issues covered by the House of Lords report
Michael Krigsman: David, tell us about the core issues that your report covered. Tim, please jump in.
Lord David Puttnam: I think Tim, in a sense, set it up quite nicely. We were looking at the potential danger to democracy—of misinformation, disinformation—and the degree to which the duty of care was being exercised by the major platforms (Facebook, Twitter, et cetera) in understanding what their role was in a new 21st Century democracy, both looking at the positive role they could play in terms of information, generating information and checking information, but also the negative in terms of the amplification of disinformation. That's an issue we looked at very carefully.
This is where Tim and my interests absolutely coincide because within those black boxes, within those algorithmic structures, is where the problem lies. The problem century-wise (maybe this will spark people a little, I think) is that these are flawed business models. The business model that drives Facebook, Google, and others is an advertising-related business model. That requires volume. That requires hits, and their incomes are generated on the back of those hits.
One of the things we tried to unpick, Michael, which was, I think, pretty important, was that we took the view that it's about reach, not about freedom of speech. We felt that a lot of the freedom of speech advocates misunderstood the problem here. Really, the problem was the amplification of misinformation, which in turn benefited, or was an enormous boost to, the revenues of those platforms. That's the problem.
We are convinced through evidence. We're convinced that they could alter their algorithms, that they can actually dial down and solve many, many of the problems that we perceive. But, actually, it's not in their business interest to do so. They're trapped, in a sense, between the demands or requirements of their shareholders to optimize share value, and the role and responsibility they have as massive information platforms within a democracy.
Lord Tim Clement-Jones: Of course, governments have been extremely reluctant, in a sense, to come up against big tech in that sense. We've seen that in the competition area over the advertising monopoly that the big platforms have. But I think many of us are now much more sensitive to this whole aspect of data, behavioral data in particular.
I think Shoshana Zuboff did us all a huge benefit by really getting into detail on what she calls exhaust data, in a sense. It may seem trivial to many of us but, actually, the use to which it's put in terms of targeting messages, targeting advertising, and, in a sense, helping drive those algorithms, I think, is absolutely crucial. We're only just beginning to come to grips with that.
Of course, David and I are both, if you like, tech enthusiasts, but you absolutely have to make sure that we have a handle on this and that we're not giving way to unintended consequences.
Impact of social media platforms on society
Michael Krigsman: What is the deep importance of this set of issues that you spend so much time and energy preparing that report?
Lord David Puttnam: If you value, as certainly I do—and I'm sure we all do value—the sort of democracy we were born and brought up in, for me it's rather like carrying a porcelain bowl across a very slippery floor. We should be looking out for it.
I did a TED Talk in 2012 ... [indiscernible, 00:07:19] entitled The Duty of Care where I made the point that we use the concept of duty of care with many, many things: in the medical sense, in the educational sense. Actually, we haven't applied it to democracy.
Democracy, of all the things that we value, may end up looking like the most fragile. Our tolerance, if you like, of the growth of these major platforms, our encouragement of the reach because of the benefits of information, has kind of blindsided us to what was also happening at the same time.
Someone described the platforms as outrage factories. I'm not sure if anyone has come up with a better description. We've actually actively encouraged outrage instead of intelligent debate.
The whole essence of democracy is compromise. What these platforms do not do is encourage intelligent debate and reflect the atmosphere of compromise that any democracy requires in order to be successful.
Lord Tim Clement-Jones: The problem is that the culture has been, to date, against us really having a handle on that. I think it's only now, and I think that it's very interesting to see what the Biden Administration is doing, too, particularly in the competition area.
One of the real barriers, I think, is thinking of these things only in terms of individual harm. I think we're now getting to the point where maybe, if somebody is affected by hate speech or racial slurs or whatever as an individual, governments are beginning to accept that that kind of individual harm is something that we need to regulate and make sure that the platforms deal with.
The area that David is raising is so important, and yet there is still resistance in governments when it comes to, if you like, the societal harms that are being caused by the platforms. Now, this is difficult to define, but the consequences could be severe if we don't get it right.
I think, across the world, you only have to look at Myanmar, for instance, [indiscernible, 00:09:33]. If that wasn't societal harm in terms of use by the military of Facebook, then I don't know what is. But there are others.
David has used the analogy of January the 6th, for instance. There are analogies and there are examples across the world where democracy is at risk because of the way that these platforms operate.
We have to get to grips with that. It may be hard, but we have to get to grips with it.
Michael Krigsman: How do you get to grips with a topic that, by its nature, is relatively vague and unfocused? Unlike individual harms, when you talk about societal harm, you're talking about very diffuse and broad impacts.
Lord David Puttnam: Michael, I sit on the Labour benches in the House of Lords and, probably unsurprisingly, I'm a Louis Brandeis [phonetic, 00:10:27] fan, so I think the most interesting thing taking place at the moment is people looking back to the early part of the 20th Century and the railroads, the breaking up of the railroads, and understanding why that had to happen.
It wasn't just about the railroads. It was about the railroads' ability to block and distort all sorts of other markets. The obvious one was the coal market, but others. Then indeed it blocked and made extraordinary advances on the nature of shipping.
What I think legislators have woken up to is, this isn't just about platforms. This is actually about the way we operate as a society. The influence of these platforms is colossal, but most important of all, the fact that what we have allowed to develop is a business model which acts inexorably against our society's best interest.
That is, it inflames fringe views. It inflames misinformation. Actually, not only inflames it. It then profits from that inflammation. That can't be right.
Lord Tim Clement-Jones: Of course, it is really quite opaque because, if you look at this, the consumer is getting a free ride, aren't they? Because of the advertising, it's being redirected back to them. But it's their data which is part of the whole business model, as David has described.
It's very difficult sometimes for regulators to say, "Ah, this kind of consumer detriment," or whatever it may be. That's why you also need to look at the societal aspects of this.
If you purely look (in conventional terms) at consumer harm, then you'd actually probably miss the issues altogether because—with things like advertising monopoly, use of data without consent, and so on, and misinformation and disinformation—it is quite difficult (without looking at the bigger societal picture) just to pin it down and say, "Ah, well, there's a consumer detriment. We must intervene on competition grounds." That's why, in a sense, we're all now beginning to rewrite the rules so that we do catch these harms.
Balancing social media platforms rights against the “duty of care”
Michael Krigsman: We have a very interesting point from Simone Jo Moore on LinkedIn who is asking, "How do you strike this balance between intelligent questioning and debate versus trolling on social media? How should lawmakers and policymakers deal with this kind of issue?"
Lord David Puttnam: We came up with, we identified, an interesting area, if you like, of compromise – for want of a better word. As I say, we looked hard at the impact on reach.
Now, on Facebook, if you're a reasonably popular person, you can quite quickly have 5,000 people following what you're saying. At that point, you get a tick.
It's clear to us that the algorithm is able to identify you as a super-spreader at that point. What we're saying is, at that moment not only have you got your tick but you then have to validate and verify what it is you're saying.
That state of outrage, if you like, is what gets blocked at the 5,000 mark and then has to be explained and justified. That seemed to us an interesting area to begin to explore. Is 5,000 the right number? I don't know.
But what was evident to us is the things that Tim really understands extremely well. These algorithmic systems inside that black box can be adjusted to ensure that, at a certain moment, validation takes place. Of course, we saw it happen in your own election that, in the end, warnings were put up.
Now, you have to ask yourself, why wasn't that done much, much, much sooner? Why? Because we only reasonably recently became aware of the depth of the problem.
In a sense, the whole Russian debacle in the U.S. in the 2016 election kind of got us off on the wrong track. We were looking in the wrong place. It wasn't what Russia had done. It was what Russia was able to take advantage of. That should have been the issue, and it took us a long time to get there.
Lord Tim Clement-Jones: That's why, in a sense, you need new ways of thinking about this. It's the virality of the message, exactly as David has talked about, the super-spreader.
I like the expression used by Avaaz in their report that came out last year looking at, if you like, the anti-vaxx messages and the disinformation over the Internet during the COVID pandemic. They talked about detoxing the algorithm. I think that's really important.
In a sense, I don't think it's possible to lay down absolutely hard and fast rules. That's the benefit of the duty of care: it is a blanket legal concept, with a code of practice, effectively enforced by a regulator. It means that it's up to the platform to get it right in the first place.
Then, of course – David's report talked about it – you need forms of redress. You need a kind of ombudsman, or whatever may be the case, independent of the platforms who can say, "They got it wrong. They allowed these messages to impact on you," and so on and so forth. There are mechanisms that can be adopted, but at the heart of it, as David said, is this black box algorithm that we really need to get to grips with.
Michael Krigsman: You've both used terms that are very interestingly put together, it seems to me. One, Tim, you were just talking about duty of care. David, you've raised (several times) this notion of flawed business models. How do these two, duty of care and the business model, intersect? It seems like they're kind of diametrically opposed.
Lord David Puttnam: It depends on your concept of what society might be, Michael. In the type of society I spent my life arguing for, they're not opposed at all, because that society would have a combination of regulation but also personal responsibility on the part of the people who run businesses.
One of the things that I think Tim and I are going to be arguing for, which we might have problems in the UK, is the notion of personal responsibility. At what point do the people who sit on the board at Facebook have a personal responsibility for the degree to which they exercise duty of care over the malfunction of their algorithmic systems?
Lord Tim Clement-Jones: I don't see a conflict either, Michael. I think that you may see different regulators involved. You may see, for instance, a regulator imposing a way of working over content, user-generated content on a platform. You may see another regulator (more specialist, for instance) on competition. I think it is going to be horses for courses, but I think that's the important thing to make sure that they cooperate.
I just wanted to say that I do think people in this context often raise the question of freedom of expression. I suspect that people will come on the chat and want to raise that issue. But again, I don't see a conflict in this area because we're not talking about ordinary discourse. We're talking about extreme messages: anti-vaxxing, incitement of violence, and so on and so forth.
The one thing David and I absolutely don't want to do is to impede freedom of expression. But that's sometimes used certainly by the platforms as a way of resisting regulation, and we have to avoid that.
How to handle the cross-border issues with technology governance?
Michael Krigsman: We have another question coming now from Twitter, from Arsalan Khan, who raises another dimension. He asks: if individual countries create their own policies on societal harm, how do you handle the cross-border issues? It seems like that's another really tricky one here.
Lord David Puttnam: I think what is happening, and this is quite determined, I think, on the part of the Biden Administration—the UK and, actually, Europe, the EU, is probably further advanced than anybody else on this—is to align our regulatory frameworks. I think that will happen.
Now, in a sense, these are big marketplaces. The Australian situation with Facebook has stimulated this. Once you get these major markets aligned, it's extremely hard to see how Facebook, Google, and the rest of them could continue with their advertising with their current model. They would have to adjust to what those marketplaces require.
Bear in mind, what troubles me a lot, Michael, is that, if you think back, Mr. Putin and President Xi must be laughing their heads off at the mess we got ourselves into because they've got their own solution to this problem – a lovely, simple solution.
We've got our knickers in a twist in an extraordinary situation quite unintended in most states. The obligation is on the great Western democracies to align the regulatory frameworks and work together. This can't be done on a country-by-country basis.
Lord Tim Clement-Jones: Once the platforms see the writing on the wall, in a sense, Michael, I think they will want to encourage people to do that. As you know, I've been heavily involved in the AI ethics agenda. That is coming together on an international basis. This, if anything, is more immediate and the pressures are much greater. I think it's bound to come together.
It's interesting that we've already had a lot of interest in the duty of care from other countries. The UK, in a sense, is a bit of a frontrunner in this despite the fact that David and I are both rather impatient. We feel that it hasn't moved fast enough.
Nevertheless, even so, by international standards, we are a little bit ahead of the game. There is a lot of interest. I think, once we go forward and we start defining and putting in regulation, that's going to be quite a useful template for people to be able to legislate.
Lord David Puttnam: Michael, it's worth mentioning that it's interesting how things bubble up and then become accepted. When the notion of fines of up to 10% of turnover was first mooted, people said, "What?! What?!"
Now, that's regarded as kind of a standard around which people begin to gather, so there is momentum. Tim is absolutely right. There is momentum here. The momentum is pretty fierce.
Ten percent of turnover is a big fine. If you're sitting on a board, you've got to think several times before you sign up on that. That's not just the cost of doing business.
Michael Krigsman: Is the core issue then the self-interest of platforms versus the public good?
Lord David Puttnam: Yes, essentially it is. Understand, if you look back at the big antitrust decisions that were made in the first decade of the 20th Century, I think we're at a similar moment and, incidentally, I think it is all but certain that these things will be resolved within the next ten years in a very similar manner.
I think it's going to be up to the platforms. Do they want to be broken up? Do they want to be fined? Or do they want to get rejoined in society?
Lord Tim Clement-Jones: Yeah, I mean I could get on and really bore everybody with the different forms of remedies available to our competition regulators. But David talked about big oil, which was broken up by what are called structural remedies.
Now, it may well be that, in the future, regulators—because of the power of the tech platforms—are going to have to think about exactly doing that, say, separating Facebook from YouTube or from Instagram, or things of that sort.
We're not out of the era of "move fast and break things." We now are expecting a level of corporate responsibility from these platforms because of the power they wield. I think we have to think quite big in terms of how we're going to regulate.
Should governments regulate social media?
Michael Krigsman: We have another comment from Twitter, again from Arsalan Khan. He's talking about, do we need a new world order that requires technology platforms to be built in? It seems like as long as you've got this private sector set of incentives versus the public good, then you're going to be at loggerheads. In a practical way, what are the solutions, the remedies, as you were just starting to describe?
Lord Tim Clement-Jones: What are governments for? Arsalan always asks the most wonderful questions, by the way, as he did last time.
What are governments for? That is what the role of government is. It is, in a sense, a brokerage. It's got to understand what is for the benefit of, if you like, society as a whole and, on the other hand, what are the freedoms that absolutely need preserving and guaranteeing and so on.
I would say that we have some really difficult decisions to make in this area. But David and I come from the point of view of actually creating more freedom because the impact of the platforms (in many, many ways) will be to reduce our freedoms if we don't do something about it.
Lord David Puttnam: It very much is, and that's why I would argue, Michael, that the Facebook reaction or response in Australia was so incredibly clumsy, because what it did is it begged a question we could really have done without, which is: are they more powerful than the sovereign nations?
Now, you can't go there because you get the G7 together or the G20 together, you know, you're not going to get into a situation where any prime minister is going to concede that, actually, "I'm afraid there's nothing we can do about these guys. They're bigger than us. We're just going to have to live with it." That's not going to happen.
Lord Tim Clement-Jones: The only problem there was the subtext. The legislation was prompted by one of the biggest media organizations in the world. In a sense, I felt pretty uncomfortable taking sides there.
Lord David Puttnam: I think it was just an encouragement to create a new series of an already long-running TV series.
Lord Tim Clement-Jones: [Laughter]
Lord David Puttnam: You're absolutely right about that. I had to put that down as an extraordinary irony of history. The truth is you don't take on nations, and many have.
Some of your companies have and genuinely believe that they were bigger. But I would say don't go there. Frankly, if I were a shareholder in Facebook – I'm not – I'd have been very, very, very cross with whoever made that decision. It was stupid.
Michael Krigsman: Where is all of this going?
Lord Tim Clement-Jones: We're still heavily engaged in trying to get the legislation right in the UK. But David and I believe that our role is to kind of keep government honest and on track and, actually, go further than they've pledged because this question of individual harm, remedies for that, and a duty of care in relation to individual harm isn't enough. It's got to go broader into societal harm.
We've got a road to travel. We've got draft legislation coming in very, very soon this spring. We've got then legislation later on in the year, but actually getting it right is going to require a huge amount of concentration.
Also, we're going to have to fight off objections on the basis of freedom of expression and so on and so forth. We are going to have to reroute our determination in principle, basically. I think there's a great deal of support out there, particularly in terms of protection of young people and things of that sort that we're actually determined to see happen.
Political messages and digital literacy
Michael Krigsman: Is there the political will, do you think, to follow through with these kinds of changes you're describing?
Lord David Puttnam: In the interest of a vibrant democracy, when any prime minister or president of any country looks at the options, I don't think they're facing many alternatives. I can't really imagine Macron, Johnson, or anybody else looking at the options available to them.
They may find those options quite uncomfortable, and the ability of some of these platforms to embarrass politicians is considerable. But when they actually look at the options, I'm not sure they're faced with that many alternatives other than pressing down the route that Tim just laid out for you.
Lord Tim Clement-Jones: I think the real Achilles heel, though, that David's report pointed out really clearly, and the government failed to answer satisfactorily, was the whole question of electoral regulation, basically. The use of misleading political messaging during elections, the impact of, if you like, opaque political messaging where it's not obvious where it's coming from, those sorts of things.
As for the determination of governments, because they are in control and they are benefiting from some of that messaging, there's a great reluctance to take on the platforms in those circumstances. Most platforms are pretty reluctant to take down any form of political advertising or messaging or, in a sense, to moderate political content.
That for me is the bit that I think is going to be the agenda that we'll probably be fighting on for the next ten years.
Lord David Puttnam: Michael, it's quite interesting that both of the major parties – not Tim's party, as you behaved very well – both of the major parties actually misled us. I wouldn't say lied to us, but they misled us in the evidence they gave about their use of the digital environment during an election, which was really lamentable. We called them out, but the fact that, in both cases, they felt that they needed to, as necessary, break the law to give themselves an edge is a very worrying indicator of what we might be up against here.
Lord Tim Clement-Jones: The trouble is, political parties love data because targeted messaging, microtargeting as it's called, is potentially very powerful in gaining support. It's like a drug. It's very difficult to wean politicians off what they see as a new, exciting tool to gain support.
Michael Krigsman: I work with various software companies, major software companies. Personalization based on data is such a major focus of technology, of every aspect of technology with tentacles to invade our lives. When done well, it's intuitive and it's helpful. But you're talking about the often indistinguishable case where it's done invasively and insinuating itself into the pattern of our lives. How do you even start to grapple with that?
Lord Tim Clement-Jones: It kind of bubbled up in the Cambridge Analytica case where the guy who ran the company was stupid enough to boast about what they were able to do. What it illustrated is that that was the tip of a very, very worrying nightmare for all of us.
No, I mean this is where you come back to individual responsibility. The idea that the people, the management of Facebook, the management of Google, are not appalled by that possibility and aren't doing everything they can to prevent it is, I think, what gives everyone at Twitter nightmares.
I don't think they ever intended or wanted to have the power they have in these fringe areas, but they're stuck with them. The answer is, how do we work with governments to make sure they're minimized?
Lord Tim Clement-Jones: This, Michael, brings in one of David and my favorite subjects, which is digital literacy. I'm an avid reader of people who try and buck the trend. I love Jaron Lanier's book Ten Reasons for Deleting your Facebook Account [sic]. I love the book by Carissa Veliz called Privacy is Power.
Basically, that kind of understanding of what you are doing when you sign up to a platform—when you give your data away, when you don't look at the terms and conditions, you tick the boxes, you accept all cookies, all these sorts of things—it's really important that people understand the consequences of that. I think it's only a tiny minority who have this kind of idea they might possibly live off-grid. None of us can really do that, so we have to make sure that when we live with it, we are not giving away our data in those circumstances.
I don't practice what I preach half the time. We're all in a hurry. We all want to have a look at what's on that website. We hit the accept all cookies button or whatever it may be, and we go through. We've got to be more considerate about how we do these things.
Lord David Puttnam: Chapter 7 of our report is all about digital literacy. We went into it in great depth. Again, fairly lamentable failure by most Western democracies to address this.
There are exceptions. Estonia is a terrific exception. Finland is one of the exceptions. They're exceptions because they understand the danger.
Estonia sits right on the edge with its vast neighbor Russia with 20% of its population being Russian. It can't afford misinformation. Misinformation for them is catastrophe. Necessarily, they make sure their young people are really educated in the way in which they receive information, how they check facts.
We are very complacent in the West, I've got to say. I'll say this about the United States. We're unbelievably complacent in those areas and we're going to have to get smart. We've got to make sure that young people get extremely smart about the way they're fed and react and respond to information.
Lord Tim Clement-Jones: Absolutely. Our politics, right across the West, demonstrate that there's an awful lot of misinformation, which is believed – believed as the gospel, effectively.
Balancing freedom of speech on social media and cyberwarfare
Michael Krigsman: We have another question from Twitter. How do you balance social media reach versus genuine freedom of speech?
Lord David Puttnam: I thought I answered it. Obviously, I didn't. It's that you accept the fact that freedom of speech requires that people can say what they want. This goes back to the black boxes. At a certain moment, the box intervenes and says, "Whoa. Just a minute. There is no truth in what you're saying," or worse, in the case of the anti-vaxxers, "There is actual harm and damage in what you're saying. We're not going to give you reach."
What you do is you limit reach until the person making those statements can validate them or affirm them or find some other way of, as it were, being allowed to amplify. It's all about amplification. It's trying to stop the amplification of distortion and lies and really quite dangerous stuff like the anti-vaxx messaging.
We've got a perfect trial run, really, with anti-vaxxing. If we can't get this right, we can't get much right.
Lord Tim Clement-Jones: There are so many ways. When people say, "Oh, how do we do this?" you've got sites like Reddit who have a community, different communities. You have rules applying to the communities that have to conform to a particular standard.
Then you've got Avaaz, with not only detoxing the algorithm but also the duty of correction. Then you've got great organizations like NewsGuard who, in a sense, have a sort of star system to verify the accuracy of news outlets. We do have the tools, but we just have to be a bit determined about how we use them.
Michael Krigsman: We have another question from Twitter that I think addresses or asks about this point, which is, how can governments set effective constraints when partisan politics benefits from misusing digital technologies and even spreading misinformation?
Lord David Puttnam: Tim laid it out for you early on why the House of Lords existed. This is where it actually gets quite interesting.
We, both Tim and I, during our careers (and we both go back, I think, 25 years) had managed to get amendments into legislation against the head. That's to say, amendments that didn't suit either the government of the day or even the lead opposition of the day. The independence of the House of Lords is wonderfully, wonderfully valuable. It is expert and it does listen.
Just a tiny example, if someone said to me or David, "Why were you not surprised that your report didn't get more traction?" it's 77,000 words long. Yeah, it's 77,000 words long because it's a bloody complicated subject. We had the time and the luxury to do it properly.
I don't think that will necessarily prove to be a stumbling block. We have enough ... [indiscernible, 00:37:01] embarrassment. The quality of the House of Lords and the ability to generate public opinion, if you like, around good, sane, sensible solutions still do function within a democracy.
But if you go down the road that Tim was just saying, if you allow the platforms to go in the route they appear to have taken, we'll be dealing with autocracy, not democracy. Then you're going to have a set of problems.
Lord Tim Clement-Jones: David is so right. The power of persuasion still survives in the House of Lords. Because the government doesn't have a majority, we can get things done if that power of persuasion is effective. We've done that quite a few times over the last 25 years, as David says.
Ministers know that. They know that if you espouse a particular cause that is clearly sensible, they're going to find they're on a pretty sticky wicket, or whatever the appropriate baseball analogy would be, Michael, in those circumstances. We have had some notable successes in that respect.
For instance, only a few years ago, we had a new code for age-appropriate design introduced, which means that webpages now need to take account of the age of the individuals actually accessing them. It's now called the Children's Code. It came into effect last year and it's a major addition to our regulation. It was quite heavily resisted by the platforms and others when it came in, but a single colleague of David's and mine (supported by us) drove it through, greatly to her credit.
Michael Krigsman: We have two questions now, one on LinkedIn and one on Twitter, that relate to the same topic: the speed of government, the speed of change, and government's ability to keep up. On Twitter, for example: future wars are going to be cyber, and the government is just catching up. The technology is changing so rapidly that it's very difficult for the legal system to track it. How do we manage that aspect?
Lord Tim Clement-Jones: Funnily enough, governments do think about that. Their first thought is about cybersecurity. Their first thought is about their cyber, basically, their data.
We've got a new, brand new, national cybersecurity center about a year or two old now. The truth is, particularly in view of Russian activities, we now have quite good cyber controls. I'm not sure that our risk management is fantastic but, operationally, we are pretty good at this.
For instance, things like the SolarWinds hack of last year have been looked at pretty carefully. We don't know what the outcome is, but it's been looked at pretty carefully by our national cybersecurity center.
Strangely enough, the criticism I have with government is, if only they thought of our data in the way that they thought about their data, we'd all be in a much happier place, quite honestly.
Lord David Puttnam: I think that's true. Michael, I don't know whether this is absolutely true in the U.S. because it's such a vast country, but my experience of legislation is it can be moved very quickly when there's an incident. Now, I'll give you an example.
I was at the Department for Education at the time when a baby was allowed to die amid a very unfortunate, catastrophic failure by different systems of government. The entire department ground to a halt for about two months while this was looked at, whilst the department tried to explain itself, and any amount of legislation was brought forward. Governments deal in crises, and this is going to be a series of crises.
The other thing governments don't like is judicial review. I think we're looking at an area here where judicial review—either by the platforms for a government decision or by civil society because of a government decision—is utterly inevitable. I actually think, longer-term, these big issues are going to be decided in the courts.
Advice for policymakers and business people
Michael Krigsman: As we finish up, can I ask you each for advice to several different groups? First is the advice that you have for governments and for policymakers.
Lord Tim Clement-Jones: Look seriously at societal harms. I think the duty of care is not enough just simply to protect individual citizens. It is all about looking at the wider picture because if you don't, then you're going to find it's too late and your own democracy is going to suffer.
I think you're right, Michael, in a sense that some politicians appear to have a conflict of interest on this. If you're in control, you don't think of what it's like to have the opposition or to be in opposition. Nevertheless, that's what they have to think about.
Lord David Puttnam: I was very impressed, indeed, tuning in to some of the judicial subcommittees at the congressional hearings on the platforms. I thought that the chairman ... [indiscernible, 00:42:35] did extremely well.
There is a lot of expertise. You've got more expertise, actually, Michael, in your country than we have in ours. Listen to the experts, understand the ramifications, and, for God's sake, politicians, it's in their interests, all their interests, irrespective of Republicans or Democrats, to get this right because getting it wrong means you are inviting the possibility of a form of government that very, very, very few people in the United States wish to even contemplate.
Michael Krigsman: What about advice to businesspeople, to the platform owners, for example?
Lord David Puttnam: Well, we had an interesting spate, didn't we, where a lot of advertisers started to take issue with Facebook, and that kind of faded away. But I would have thought that, again, it's a question of regulatory oversight and businesses understanding.
How many businesses in the U.S. want to see democracy crumble? I was quite interested, immediately after the January 6th events, in the way businesses walked away, not so much from the Republican party, but from Trump.
I just think we've got to begin to hold up a mirror to ourselves and also look carefully at what the ramifications of getting it wrong are. I don't think there's a single business in the U.S. (or if there are, there are very, very few) who wish to go down that road. They're going to realize that that means they've got to act, not just react.
Lord Tim Clement-Jones: I think this is a board issue. This is the really important factor.
Looking on the other side, not the platform side because I think they are only too well aware of what they need to do, but if I'm on the other side and I'm, if you like, somebody who is using social media, as a board member, you have to understand the technology and you have to take the time to do that.
The advertising industry—really interesting, as David said—they're developing all kinds of new technology solutions like blockchain to actually track where their advertising messages are going. If they're directed in the wrong way, they find out and there's an accountability down the blockchain which is really smart in the true sense of the word.
It's using technology to understand technology. I think you can't leave it to the chief information officer or the chief technology officer. You as the CEO or the chair, you have to understand it.
Lord David Puttnam: Tim is 100% right. I've sat in a lot of boards in my life. If you really want to grab a board's attention – I'm not saying which part of the body you're going to grab – start looking at the register and then have a conversation about how adequate directors' insurance is. It's a very lively discussion.
Lord Tim Clement-Jones: [Laughter]
Lord David Puttnam: I think this whole issue of personal responsibility, the things that insurance companies will and won't take on in terms of protecting companies and boards, that's where a lot of this could land and very interestingly.
Importance of digital education
Michael Krigsman: Let's finish up by any thoughts on the role of education and advice that you may have for educators in helping prepare our citizens to deal with these issues.
Lord Tim Clement-Jones: Funnily enough, I've just developed (with a group of people) a framework for ethical AI for use in education. We're going to be launching that in March.
The equivalent is needed in many ways because, of course, digital literacy and digital education are incredibly important. And this isn't just a younger-generation issue; it applies to parents and teachers too. It needs to go all the way through. I think we need to be much more proactive about the tools that are out there for parents and others, even main board directors.
You cannot spend enough time talking about the issues. That's why, when David mentioned Cambridge Analytica, suddenly everybody gets interested. But it's a very rare example of suddenly people becoming sensitized to an issue that they previously didn't really think about.
Lord David Puttnam: It's a parallel, really, with climate change. These are our issues. If we're going to prepare our kids – I've got three grandchildren – if we're going to prepare them properly for the remainder of their lives, we have an absolute obligation to explain to them what challenges their lives will face, what forms of society they're going to have to rally around, what sort of governance they should reasonably expect, and how they'll participate in all of that.
If they're left in ignorance, be it on climate change or, frankly, on all the issues we've been discussing this evening, we are making them incredibly vulnerable to a form of challenge and a form of life far removed from the very privileged lives we've lived. I think that the lives of our grandchildren, unless we get this right for them and help them, will be very diminished.
I've used that word a lot recently. They will live diminished lives and they'll blame us, and they'll wonder why it happened.
Michael Krigsman: Certainly, one of the key themes that I've picked up from both of you during this conversation has been this idea of responsibility, individual responsibility for the public welfare.
Lord David Puttnam: Unquestionably. It's summed up in that very phrase, duty of care. We have an absolutely overwhelming duty of care for future generations, and it applies as much to the digital environment as it does to climate.
Lord Tim Clement-Jones: Absolutely. In a way, what we're now having to overturn is this whole idea that online was somehow completely different to offline, to the physical world. Well, some of us have been living in the online remote world for the whole of last year, but why should standards be different in that online world? They shouldn't be. We should expect the same standards of behavior and we should expect people to be accountable for that in the same way as they are in the offline world.
Michael Krigsman: Okay. Well, what a very interesting conversation. I would like to express my deep thank you to Lord Tim Clement-Jones and Lord David Puttnam for joining us today.
David, before we go, I just have to ask you. Behind you and around you are a bunch of photographs and awards that seem distant from your role in the House of Lords. Would you tell us a little bit more about your background very quickly?
Lord David Puttnam: Yes. I was a filmmaker for many years. That's an Emmy sitting behind me. The reason the Emmy is sitting there is the shelf isn't deep enough to take it. But I got my Oscar up there. I've got four or five Golden Globes and three or four BAFTAs, David di Donatello, and Palme d'Or from Cannes. I had a very, very happy, wonderfully happy 30 years in the movie industry, and I've had a wonderful 25 years working with Tim in the legislature, so I'm a lucky guy, really.
https://www.cxotalk.com/episode/digital-technology-trust-social-impact