By Annabel Ashby, Imran Syed & Tim Clement-Jones on March 3, 2021


Hard or soft law?

That the regulation of artificial intelligence (AI) is a hot topic is hardly surprising. AI is being adopted at speed, high-profile AI decision-making regularly makes the news, and the sheer volume of guidance and regulatory proposals can be challenging for interested parties to digest.

Where are we now? What can we expect in terms of future regulation? And what might compliance with “ethical” AI entail?

High-level ethical AI principles were published by the OECD, EU and G20 in 2019. As explained below, great strides followed in 2020 as key bodies worked to capture these principles in proposed new regulation and operational processes. 2021 will undoubtedly maintain this momentum as these initiatives continue their journey into further guidance and some hard law.

In the meantime, with regulation playing catch-up with reality (so often the case where technological innovation is concerned), industry has sought to provide reassurance by developing voluntary codes. While this is helpful and laudable, regulators are taking the view that more consistent, risk-based regulation is preferable to voluntary best practice.

We outline the most significant initiatives below, but first it is worth understanding what regulation might look like for an organisation using AI.

Regulating AI

Of course the devil will be in the detail, but analysis of the most influential papers from around the globe reveals common themes that are the likely precursors of regulation. Conceptually, the regulation of AI is fairly straightforward and has three key components:

  • setting out the standards to be attained;
  • creating record keeping obligations; and
  • possible certification following an audit of those records, all framed by a risk-based approach.

Standards

Quality starts with the governance process around an organisation’s decision to use AI in the first place (does that decision, perhaps, involve an ethics committee? If so, what does the committee consider?) before turning to the quality of the AI itself and how it is deployed and operated by the organisation.

Key areas that will drive standards in AI include the quality of the training data used to teach the algorithm (flawed data can “bake in” inequality or discrimination), the degree of human oversight, and the accuracy, security and technical robustness of the IT. There is also usually an expectation that certain information be given to those affected by the decision-making, such as consumers or job applicants. This includes explainability of those decisions and an ability to challenge them – a process made more complex when decisions are made in the so-called “black box” of a neural network.

An argument against specific AI regulation is that some of these quality standards are already enshrined in hard law, most obviously in equality laws and, where relevant, data protection. However, the more recent emphasis on ethical standards means that some aspects of AI that have historically been considered soft nice-to-haves may well develop into harder must-haves for organisations using AI. For example, the Framework for Ethical AI adopted by the European Parliament last Autumn includes mandatory social responsibility and environmental sustainability obligations.
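By way of illustration only, the sketch below shows the kind of simple training-data check that such standards might translate into in practice. The column names ("gender", "approved") and the five-percentage-point threshold are hypothetical; a real audit would examine many more attributes and fairness metrics.

```python
# A minimal sketch (not drawn from any regulation) of a training-data disparity check.
import pandas as pd

def outcome_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes per group in the training data."""
    return df.groupby(group_col)[outcome_col].mean()

def parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest gap in positive-outcome rates between any two groups."""
    rates = outcome_rate_by_group(df, group_col, outcome_col)
    return float(rates.max() - rates.min())

# Hypothetical example: flag a data set for review if historical approval rates
# differ by more than 5 percentage points between groups.
training_data = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M"],
    "approved": [0,   1,   1,   1,   0],
})
if parity_gap(training_data, "gender", "approved") > 0.05:
    print("Training data shows a material outcome gap - escalate for review")
```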

Records

To demonstrate that processes and standards have been met, record-keeping will be essential. At least some of these records will be open to third-party audit as well as being used for an organisation’s own due diligence. Organisations need a certain maturity in their AI governance and operational processes to achieve this, although for many it will be a question of identifying gaps and/or enhancing existing processes rather than starting from scratch. Audit could include information about, or access to, training data sets; evidence that certain decisions were made at board level; staff training logs; operational records, and so on. Records will also form the foundation of the all-important accountability aspects of AI.

That said, AI brings particular challenges to record-keeping and audit. These include an argument for going beyond singular audits and static record-keeping into a more continuous mode of monitoring, given that the decisions of many AI solutions will change over time as they seek to improve accuracy. This is of course part of the appeal of moving to AI, but it creates potentially greater opportunity for bias or errors to be introduced and to scale quickly.
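As a purely illustrative sketch of what machine-readable record-keeping might look like, the snippet below appends one record per AI decision. The field names, the model identifier and the idea of hashing inputs are assumptions made for the example, not a prescribed format.

```python
# A minimal sketch, assuming a simple append-only decision log for later audit.
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

def log_decision(log_file: str, model_version: str, inputs: dict,
                 decision: str, human_reviewer: Optional[str] = None) -> None:
    """Append one AI decision record so it can be audited (or re-audited) later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,      # ties the decision to a specific model build
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),                        # evidences what the model saw, without storing raw data here
        "decision": decision,
        "human_reviewer": human_reviewer,     # records the degree of human oversight, if any
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: a credit decision logged alongside model version and reviewer.
log_decision("decisions.jsonl", "credit-model-2021-02",
             {"income": 42000, "postcode": "AB1"}, "declined",
             human_reviewer="analyst-17")
```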

Certification

A satisfactory audit could inform AI certification, helping to drive quality and build the customer and public confidence in AI decision-making that is necessary for its successful use. Again, the evolving nature of AI which “learns” complicates matters: certification will need to be measured against standards and monitoring capabilities that address this aspect of AI risk.

Risk-based approach

Recognising that AI’s uses range from the relatively insignificant to critical and/or socially sensitive decision-making, best practice and regulatory proposals invariably take a flexible approach and focus requirements on “high-risk” uses of AI. This concept is key: proportionate, workable regulation must take into account the context in which the AI is to be deployed and its potential impact, rather than merely focusing on the technology itself.
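To make the point concrete, here is a deliberately simplified sketch of a context-and-impact triage. The sectors and criteria are invented for illustration; any real definition of “high-risk” will come from the regulation itself.

```python
# A minimal sketch of risk triage driven by context and impact, not by the technology.
HIGH_IMPACT_CONTEXTS = {"credit", "recruitment", "healthcare", "criminal justice"}  # illustrative only

def risk_tier(sector: str, affects_individuals: bool, decisions_automated: bool) -> str:
    """Classify a proposed AI use by where and how it is deployed."""
    if sector in HIGH_IMPACT_CONTEXTS and affects_individuals and decisions_automated:
        return "high-risk: full record-keeping, audit and human oversight expected"
    if affects_individuals:
        return "medium-risk: proportionate controls"
    return "low-risk: baseline good practice"

# Hypothetical usage: an automated CV-screening tool in recruitment.
print(risk_tier("recruitment", affects_individuals=True, decisions_automated=True))
```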

Key initiatives and Proposals

Turning to some of the more significant developments in AI regulation, the following specifics are worth focusing on:

OECD

The OECD outlined its classification of AI systems in November with a view to giving policy-makers a simple lens through which to view the deployment of any particular AI system. Its classification uses four dimensions: context (e.g. sector, stakeholders, purpose); data and input; AI model (e.g. neural or linear, supervised or unsupervised); and tasks and output (i.e. what does the AI do?). Read more here.
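For organisations mapping their own AI inventory against this lens, the four dimensions could be captured in a simple record, as in the hypothetical sketch below (the example system and its field values are invented; the OECD framework itself is a policy lens, not a software specification).

```python
# A minimal sketch of recording the OECD's four classification dimensions for an AI inventory.
from dataclasses import dataclass, asdict

@dataclass
class AISystemClassification:
    context: str          # sector, stakeholders, purpose
    data_and_input: str   # provenance and nature of the data
    ai_model: str         # e.g. neural vs. linear, supervised vs. unsupervised
    tasks_and_output: str # what the system actually does

# Hypothetical example entry.
recruitment_screener = AISystemClassification(
    context="HR / recruitment; affects job applicants",
    data_and_input="historical CVs and hiring outcomes",
    ai_model="supervised neural network classifier",
    tasks_and_output="ranks applicants for interview shortlisting",
)
print(asdict(recruitment_screener))
```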

Europe

Several significant proposals were published by key institutions in 2020.

In the Spring, the European Commission’s White Paper on AI proposed regulation of AI by a principles-based legal framework targeting high-risk AI systems. The Commission believes that regulation can underpin an AI “ecosystem of excellence” with resulting public buy-in thanks to an “ecosystem of trust.” For more detail see our 2020 client alert. Industry response to this proposal was somewhat lukewarm, but the Commission seems keen to progress with regulation nevertheless.

In the Autumn the European Parliament adopted its Framework for Ethical AI, to be applicable to “AI, robotics and related technologies developed, deployed and/or used within the EU” (regardless of the location of the software, algorithm or data itself). Like the Commission’s White Paper, this proposal also targets high-risk AI (although what high-risk means in practice is not aligned between the two proposals). As well as the social and environmental aspects touched upon earlier, notable in this proposed Ethical Framework is the emphasis on the human oversight required to achieve certification. Concurrently, the European Parliament looked at IP ownership for AI-generated creations and published its proposed Regulation on liability for the operation of AI systems, which recommends, among other things, an update of the current product liability regime.

Looking through the lens of human rights, the Council of Europe considered the feasibility of a legal framework for AI and how that might best be achieved. Published in December, its report identified gaps to be plugged in the existing legal protection (a conclusion also reached by the European Parliamentary Research Service, which found that existing laws, though helpful, fell short of the standards required for its proposed AI ethics framework). Work is now ongoing to draft binding and non-binding instruments to take this study forward.

United Kingdom   

The AI Council’s AI Roadmap sets out recommendations to the UK government on the strategic direction of AI. That January 2021 report covers a range of areas, from promoting UK talent to trust and governance. For more detail, read the executive summary.

Only a month before, in December 2020, the House of Lords had published AI in the UK: No room for complacency, a report with a strong emphasis on the need for public trust in AI and the associated issue of ethical frameworks. Noting that industry is currently self-regulating, the report recommended sector regulation that would extend to practical advice as well as principles and training. This seems to be a sound conclusion given that the Council of Europe’s work included the review of over 100 ethical AI documents which, it found, started from common principles but interpreted these very differently when it came to operational practice.

The government’s response to that report has just been published. It recognises the need for public trust in AI, “including embedding ethical principles against a consensus normative framework.” The response promotes a number of initiatives, including the work of the AI Council and the Ada Lovelace Institute, which have together been developing a legal framework for data governance and are about to report on it.

The influential Centre for Data Ethics and Innovation (CDEI) published its AI Barometer and its Review into Bias in Algorithmic Decision-Making. Both reports make interesting reading, with the barometer looking at risk and regulation across a number of sectors. In the context of regulation, it is notable that the CDEI does not recommend a specialist AI regulator for the UK but seems to favour a sectoral approach if and when regulation is required.

Regulators

Regulators are interested in lawful use, of course, but they are also concerned with the bigger picture. Might AI decision-making disadvantage certain consumers? Could AI inadvertently create sector vulnerability thanks to over-reliance by the major players on any particular algorithm and/or data pool? (The competition authorities will be interested in this aspect too.) The UK’s Competition and Markets Authority published research into potential AI harms in January and is calling for evidence on the most effective way to regulate AI. Visit the CMA website here.

The Financial Conduct Authority will be publishing a report into AI transparency in financial services imminently. Unsurprisingly, the UK’s data protection regulator has published guidance to help organisations audit AI in the context of data protection compliance, and the public sector benefits from detailed guidance from the Turing Institute.

Regulators themselves are now becoming more of a focus. The December House of Lords report also recommended regulator training in AI ethics and risk assessment. As part of its February response, the government stated that the Competition and Markets Authority, the Information Commissioner’s Office and Ofcom have together formed a Digital Regulation Cooperation Forum (DRCF) to cooperate on issues of mutual importance, and that a wider forum of regulators and other organisations will consider training needs.

2021 and beyond

In Europe we can expect regulation to develop at pace in 2021, despite concerns from Denmark and others that AI may become over-regulated. As we increasingly develop the tools for classification and risk assessment, the question is therefore less about whether to regulate and more about which applications, contexts and sectors are candidates for early regulation.