This is the talk I gave at the opening of the excellent Portraits of AI Leadership Conference, organised by Ramsay Brown of the AI Responsibility Lab, based in LA, and Dr Julian Huppert, Director of the Intellectual Forum at Jesus College Cambridge.
It’s a pleasure to help kick off proceedings today.
Now you may well ask why a lawyer like me fell among tech experts like yourselves.
In 2016, as a digital spokesperson at an Industry and Parliament Trust breakfast, I realised that the level of parliamentary understanding of AI was incredibly low, so with Stephen Metcalfe MP, then the chair of the Science and Technology Select Committee, I founded the All Party Parliamentary Group on Artificial Intelligence. The APPG is dedicated to informing parliamentarians about developments and creating a community of interest around future policy regarding AI: its adoption, use and regulation.
As a result I was asked to chair the House of Lords Special Enquiry Select Committee on AI, with the remit “to consider the economic, ethical and social implications of advances in artificial intelligence”. This produced its report “AI in the UK: Ready, Willing and Able?” in April 2018. It took a close look at government policy towards AI and its ambitions, including those contained in the Hall/Pesenti Review of October 2017 and those set out by former Prime Minister Theresa May in her Davos World Economic Forum speech, including her goal for “the UK to lead the world in deciding how AI can be deployed in a safe and ethical manner.”
Since then, as well as co-chairing the All Party AI Group, I have maintained a close interest in the development of UK policy in AI, chaired a follow-up to the Select Committee’s report, “AI in the UK: No Room for Complacency”, acted as an adviser to the Council of Europe’s working party on AI (CAHAI) and helped establish the OECD Global Parliamentary Network on AI.
I am now, for my sins, the Science, Innovation and Technology Spokesperson for the Liberal Democrats in the House of Lords.
Across the world, COVID-19 has emphasised and accelerated the dependence of virtually every business and sector on the successful adoption of the latest relevant technologies for their survival. Barely a day goes by without some reference to AI in the news. Both today and yesterday GPT-4 was one of the lead stories.
Artificial Intelligence presents opportunities in a whole variety of sectors. We know what it can do:
- Detect financial crime, fraud and anti-competitive behaviour
- Deliver personalised education, tailoring the learning experience
- Conserve energy
- In healthcare: diagnostics, drug discovery and distribution, and administration too
- Deliver the UN Sustainable Development Goals, in terms of more productive agriculture and the alleviation of hunger and poverty
- Enable smart or connected cities
- Serve as technology used by regulators, or RegTech
The opportunities for AI are incredibly varied. Indeed, many people find it unhelpful to have such a variety of different types of machine learning labelled AI, but I think we are stuck with it! There are common factors, such as deep neural networks and machine learning. Increasingly, the benefits are seen not just in greater efficiency and speed of analysis, pattern detection and prediction, but in what AI can creatively add to human endeavour.
We’ve seen the excitement over ChatGPT from OpenAI and other large language models, and AI text-to-image applications such as DALL·E, and now we have GPT-4. The combination of these systems will give every appearance of AGI.
The anticipated economic benefits over this decade are significant, with estimates predicting that the UK’s GDP will be up to 10% higher in 2030 thanks to the development and adoption of AI.
But things can go wrong. This isn’t just any old technology. The degree of autonomy, the lack of human intervention and the black-box nature of some systems make it different from other tech.
This is well illustrated by Brian Christian’s book The Alignment Problem and Stuart Russell’s Human Compatible. The challenge is to ensure that AI is our servant, not our master. Stuart Russell says we have to build uncertainty into the delivery of the objectives of AI systems, so that a human in the loop is not just desirable but necessary.
Furthermore, failure to tackle issues such as bias and discrimination and a lack of transparency could lead to a loss of public and consumer trust, reputational damage and an inability to deploy new technology. Public trust and trustworthy AI are fundamental to continued advances in technology.
Just take, for instance:
- Consumer financial services decisions, such as on credit rating
- Cybersecurity issues
- Deployment in the workplace
This is particularly true in government and public sector use of AI.
- Public sector decisions such as on social security matters
- Live facial recognition by the police, with the dangers of the surveillance state
- And of course the deployment of Lethal Autonomous Weapons
The need to ensure responsible or ethical AI in business and public adoption could and should, however, lead to a positive reappraisal of governance more broadly, in both the private and public sectors.
It is clear that AI, even in its narrow form, will and should have a profound impact on and implications for corporate governance. Trade organisations such as techUK and AI-specific organisations such as the Partnership on AI recognise that corporate responsibility and governance on AI are increasingly important.
This means a much more values-driven approach to the adoption of new technology. Engagement from boards, through governance, right through to policy implementation is crucial. This is not purely a matter for the CTO/CIO.
Key areas that need tackling include:
- Raising senior management awareness of the issues posed by AI
- Definition and classification of the AI systems being developed, procured and deployed
- Employment issues: will AI augment human skills or substitute for them?
- Oversight, including accountability through boards and audit and risk committees
- Risk assessment, with the identification of high-risk uses
- Procurement rules
- Whistleblowing
But it also, importantly, means assessing the ethics of adopting AI and the ethical principles to be applied. It may involve the establishment of an ethics advisory committee.
We have a pretty good common set of principles, from the OECD and the G20, which are generally regarded as the gold standard and which, if adopted, can help us ensure:
- Quality of training data
- Freedom from bias
- Respect for individual civil and human rights
- Accuracy and robustness
- Transparency and explainability, which of course include the need for open communication where these technologies are deployed
Generally, in business and in the tech research and development world, I think there is an appetite for regulatory certainty and the adoption of common standards, particularly standards for tools such as:
- Conformity/risk and impact assessment
- AI audit
- Continuous Monitoring
- Scoreboxes
- And Sandboxing
I am optimistic too that common standards can be achieved internationally in all these areas. Work on common standards is bearing fruit. In particular, we saw the launch last October of the interactive AI Standards Hub by the Alan Turing Institute, with the support of the British Standards Institution and the National Physical Laboratory, which will provide users across industry, academia and regulators with practical tools and educational materials to effectively use and shape AI technical standards.
This in turn could lead to agreement on ISO standards with the EU and the US, where NIST is actively engaged in developing such standards.
Agreement on the actual regulation of AI, i.e. which elements of governance and application of standards are obligatory, is, however, more difficult.
There are already some elements of a legal framework in place. Even without specific legislation, AI deployment in the UK will interface with existing legislation and regulation in particular relating to
- personal data under the GDPR
- discrimination and fair treatment under the Human Rights Act and the Equality Act
- product safety and public safety
- And various sector-specific regulatory regimes requiring oversight and control by persons undertaking regulated functions: the FCA for financial services, for example, and in the future Ofcom for social media.
But when it comes to legislation and regulation that is specific to AI, that is where some of the difficulties and disagreements start emerging, especially over the UK’s divergent approach.
The UK has stated that it wishes its regulation to be innovation-friendly and context-specific. We need to be clear, however, that regulation is not necessarily the enemy of innovation; it can in fact be the stimulus, and the key to gaining and retaining public trust around digital technology and its adoption, so that we can realise the benefits and minimise the risks.
Then we have the policy that regulation will be context-specific. As regards categorising AI, rather than working to a broad definition of AI and determining what falls within scope, which is the approach taken by the EU AI Act, the UK looks set to follow an approach that instead sets out the core principles of AI, which the government says “allows regulators to develop their own sector-specific definitions to meet the evolving nature of AI as technology advances.”
This approach, which potentially adopts different regulatory requirements across sectors, in my view runs the risk of creating barriers for developers and adopters, who would have to navigate the regulators of multiple sectors even given the new levels of cooperation currently being put in place. Where a cross-sector AI system is concerned, in finance and telecoms for example, they would potentially have to understand and comply with different regimes administered by the FCA, the Prudential Regulation Authority and Ofcom at the same time.
In its AI policy paper published last July there is a surprising admission by the government that a context-driven approach may lead to less uniformity between regulators, and may cause confusion and apprehension for stakeholders who will potentially need to consider a regime of multiple regulators, as well as the measures required to deal with extra-territorial regimes such as the EU Regulation.
Also, the more we diverge from other jurisdictions when it comes to regulation, the more difficult it gets for those who want to develop AI systems internationally.
One example is the proposals to water down data protection under the GDPR, which could mean difficulty in transferring data between the UK and Europe. The more I look at the new Data Protection and Digital Information Bill introduced into Parliament last week, the more problematic it appears.
In my view, without a broad definition and some overarching duty to carry out a risk and impact assessment, and subsequent regular audits to assess whether an AI system is conforming to AI principles, the governance of AI systems will be deficient, if only on the grounds that not every sector is regulated.
For example, except for certain specific products such as driverless cars, or say in financial services, and as proposed for social media platforms, there is no accountability or liability regime established for the operation of AI systems more broadly.
Regulation could and should take the form of an overarching regulatory regime designed to ensure public transparency in the use of AI technologies, and the recourse available across sectors for unethical use. This should set out clear common duties to assess risk and impact and to adhere to common standards. Depending on the extent of the risk and impact assessed, further regulatory requirements would arise.
This includes the public sector. Although the UK Government has recognised the need for guidance for public sector organisations in the procurement and use of AI, there is no central or local government compliance mechanism, and no transparency yet in the form of a public register of the use of automated decision-making. It is interesting that many US cities, and indeed big tech companies, have been much more proactive.
Also, despite the efforts of parliamentarians and organisations such as the Ada Lovelace Institute, there is no recognition at all by the Government that explicit legislation and/or regulation of intrusive AI technology such as live facial recognition is needed to prevent the arrival of the surveillance state.
But international harmonisation is, in my view, essential if we are to see developers able to commercialise their products on a global basis, assured that they are adhering to common standards of regulation, and I believe it would help provide the certainty businesses need to develop and invest in the UK more readily.
I would go further when it comes to dealing with our nearest trading partner. When the White Paper does emerge, I believe it is important that there is recognition that we need a considerable degree of convergence between ourselves and the EU, and that a risk-based form of horizontal rather than purely sectoral regulation is required; otherwise we face potentially another trade barrier, AI adequacy, to add to the need for data adequacy.
That, in my view, is the way to get real traction to realise the full benefits of the global development of responsible AI, AI for good, which we all want to see flourish!