Catalyzing Cooperation: Working Together Across AI Governance Initiatives

Here is the text and video of what I said at this stimulating and useful event hosted by the International Congress for the Governance of AI.

 

https://www.youtube.com/watch?v=z_uji0LolLA

 

 

It is now my pleasure to introduce Lord Clement-Jones, also in a video presentation. He is the former chair of the House of Lords Select Committee on AI. He is the co-chair of the All-Party Parliamentary Group on AI, and he is a founding member of the OECD Parliamentary Group on AI and a member of the Council of Europe's ad hoc Committee on AI (CAHAI).

LORD TIM CLEMENT-JONES: Hello. It is great to be with you.

Today I am going to try to answer questions such as: What kind of international AI governance is needed? Can we build on existing mechanisms? Or does some new body need to be created?

As our House of Lords follow-up report, "AI in the UK: No Room for Complacency," strongly emphasized last December, it has never been clearer, particularly after this year of COVID-19 and our ever-greater reliance on digital technology, that we need to retain public trust in the adoption of AI, particularly in its more intrusive forms, and that this is a shared issue internationally. To do that we need, whilst realizing the opportunities, to mitigate the risks involved in the application of AI, and this brings with it the need for clear standards of accountability.

The year 2019 was the year of the formulation of high-level ethical principles in the field of AI by the OECD, the European Union, and the G20. These are very comprehensive and provide the basis for a common set of international standards. For instance, they all include the need for explainability of decisions and an ability to challenge them, a process made more complex when decisions are made in the so-called "black box" of neural networks.

But it has become clear that voluntary ethical guidelines, however widely they are shared, are not enough to guarantee ethical AI, and there comes a point where the risks attendant on noncompliance with ethical principles are so high that policymakers need to accept that certain forms of AI development and adoption require enhanced governance and/or regulation.

The key factor in 2020 has been the work done at international level in the Council of Europe, OECD, and the European Union towards putting these principles into practice in an approach to regulation which differentiates between different levels of risk and takes this into account when regulatory measures are formulated.

Last spring the European Commission published its white paper on the proposed regulation of AI by a principle-based legal framework targeting high-risk AI systems. As the white paper says, a risk-based approach is important to help ensure that the regulatory intervention is proportionate. However, it requires clear criteria to differentiate between different AI applications, in particular on the question of whether or not they are high-risk. The determination of what constitutes a high-risk AI application should be clear, easily understandable, and applicable for all parties concerned.

In the autumn the European Parliament adopted its framework for ethical AI, applicable to AI, robotics, and related technologies developed, deployed, and/or used within the European Union. Like the Commission's white paper, this proposal also targets high-risk AI. Notable in this proposed ethical framework, alongside its social and environmental aspects, is the emphasis on the human oversight required to achieve certification.

Looking through the lens of human rights, including democracy and the rule of law, the CAHAI last December drew up a feasibility study for regulation of AI, which likewise advocates a risk-based approach to regulation. It considers the feasibility of a legal framework for AI and how that might best be achieved. As the study says, these risks, however, depend on the application, context, technology, and stakeholders involved. To counter any stifling of socially beneficial AI innovation and to ensure that the benefits of this technology can be reaped fully while adequately tackling its risks, the CAHAI recommends that a future Council of Europe legal framework on AI should pursue a risk-based approach targeting the specific application context, and work is now ongoing to draft binding and non-binding instruments to take the study forward.

If, however, we aspire to a risk-based regulatory and governance approach, we need to be able to calibrate the risks, which will determine what level of governance is required. But, as has been well illustrated during the COVID-19 pandemic, the language of risk is fraught with misunderstanding. When it comes to AI technologies we need to assess the risks by reference to the nature of AI applications and the context of their use: the potential impact and probability of harm, the importance and sensitivity of the use of data, the application within a particular sector, the affected stakeholders, the risks of non-compliance, and whether a human in the loop mitigates risk to any degree.

In this respect, the detailed and authoritative classification work carried out by another international initiative, the OECD Network of Experts on AI working group, so-called "ONE AI," on the classification of AI systems comes at a crucial and timely point. This gives policymakers a simple lens through which to view the deployment of any particular AI system. Its classification uses four dimensions: context, i.e., sector, stakeholder, purpose, etc.; data and input; AI model, i.e., neural or linear, supervised or unsupervised; and tasks and output, i.e., what does the AI do? It ties in well with the Council of Europe feasibility work.
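The four ONE AI dimensions lend themselves to a simple structured record. As an illustrative sketch only (the field names and the example system below are my own shorthand, not the OECD's), a policymaker's view of a particular system might be captured like this:

```python
from dataclasses import dataclass

@dataclass
class AISystemClassification:
    """Illustrative record following the four ONE AI dimensions."""
    # Dimension 1 - context: sector, stakeholders, purpose, etc.
    sector: str
    stakeholders: list
    purpose: str
    # Dimension 2 - data and input
    data_sources: list
    personal_data: bool
    # Dimension 3 - AI model: e.g. neural or linear, supervised or unsupervised
    model_type: str
    learning_mode: str
    # Dimension 4 - tasks and output: what does the AI do?
    task: str

# Hypothetical example: a CV-screening tool viewed through the four dimensions
screener = AISystemClassification(
    sector="recruitment",
    stakeholders=["job applicants", "employers"],
    purpose="shortlist candidates",
    data_sources=["CVs", "application forms"],
    personal_data=True,
    model_type="neural",
    learning_mode="supervised",
    task="ranking",
)
```

Even a record this simple makes the risk-relevant facts (personal data, affected stakeholders, opacity of the model) explicit, which is precisely the "simple lens" the classification aims to give policymakers.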

Once the risks have been assessed in this way, a clear governance hierarchy can be followed depending on the level of risk. Where the risk is relatively low, a flexible approach, such as a voluntary ethical code without a hard compliance mechanism, can be envisaged, such as those enshrined in the international ethical codes mentioned earlier.

Where the risk is a step higher, enhanced corporate governance using business guidelines and standards with clear disclosure and compliance mechanisms needs to be instituted. Already at international level we have guidelines on government best practice, such as the AI procurement guidelines developed by the World Economic Forum, and these have been adopted by the UK government. Finally we may need to introduce comprehensive regulation, such as that which is being adopted for autonomous vehicles, which is enforceable by law.

Given the way the work of all of these organizations is converging, the key question of course is whether on the basis of this kind of commonly held ethical evaluation and risk classification and assessment there are early candidates for regulation and to what extent this can or should be internationally driven. Concern about the use of live facial recognition technology is becoming widespread, with many U.S. cities banning its use and proposals for its regulation under discussion in the European Union and the United Kingdom.

Of concern too are technologies involving deep fakes and algorithmic decision making in sensitive areas, such as criminal justice and financial services. The debate over hard and soft law in this area is by no means concluded, but there is no doubt that pooling expertise at international level could bear fruit. A common international framework could be created, informed by the work so far of the high-level panel on digital cooperation, the UN Human Rights Council, and the ITU with its AI for Good platform, and brokered by UNESCO, where an expert group has been working on a recommendation on the ethics of artificial intelligence, or by the United Nations itself, which in 2019 established a Centre for Artificial Intelligence and Robotics in the Netherlands. Such a framework could gain public trust by establishing that adopters are accountable for high-risk AI applications and at the same time allay concerns that AI and other digital technologies are being over-regulated.

Given that our international aim on AI governance must be to observe the cardinal principle that AI needs to be our servant and not our master, there is cause for optimism: experts, policymakers, and regulators now recognize that they have a duty to ensure that whatever solutions they adopt recognize ascending degrees of AI risk, and that policies and solutions are classified and calibrated accordingly.

Regulators themselves are now becoming more of a focus. Our House of Lords report recommended regulator training in AI ethics and risk assessment, and I believe that this will become the norm. But even if at this juncture we cannot yet identify a single body to take the work forward, there is clearly a growing common international AI agenda, and—especially I hope with the Biden administration coming much more into the action—we can all expect further progress in 2021.

Thank you.


Lord C-J: ‘Byzantine’ to ‘inclusive’: status update on UK digital ID

From Biometricupdate.com

https://www.biometricupdate.com/202108/byzantine-to-inclusive-status-update-on-uk-digital-id

The good, the bad and the puzzling elements of the UK’s digital ID project and landscape were discussed by a group of stakeholders who found the situation frustrating at present, but believe recent developments offer hope for a ‘healthy ecosystem’ of private digital identity providers and parity between physical and digital credentials. Speakers also compared UK proposals with schemes emerging elsewhere, praising the EU digital wallet approach.

The panel was convened by techUK, a trade association focusing on the potential of digital technologies, against a backdrop of recent announcements by the UK’s Department for Digital, Culture, Media and Sport such as the ‘Digital identity and attributes consultation’ into the ongoing framework underpinning the move to digital, along with the slow-moving legislation on the digital economy.

“We seem to be devising some Byzantine pyramid of governance,” said Lord Tim Clement-Jones, the Liberal Democrats’ House of Lords Spokesperson for Digital, of the overall UK plan for digital ID and the multiple oversight and auditing bodies proposed. Looking at the ‘Digital identity and attributes’ documentation “will blow your mind,” he added, such is his frustration with the topic. He believes the legislation in Part 3 of the Digital Economy Act 2017 should have been brought into force long ago, allowing providers such as Yoti to bring age verification solutions to the market.

Fellow panellist Julie Dawson, Director of Regulatory and Policy at Yoti, was more optimistic about the current state of affairs. She found it highly encouraging that 3.5 million people had used the UK’s EU Exit: ID Document Check app, which included biometric verification, as are the Home Office sandbox trials for digital age verification. However, the lack of a solid digital ID could put British people at a disadvantage, even in the UK, if they cannot verify themselves online, such as in the hiring process. Yet people performing manual identity checks are expected to verify a driving license from another country, one they have never seen before, and make a decision on it – something she finds “theatrical.”

The panel, which also featured Laura Barrowcliff, Head of Strategy at digital identity provider GBG Plc, was heavily skewed towards the private sector, including the chair, Margaret Moore, Director of Citizen & Devolved Government Services at French firm Sopra Steria, which has recently been awarded a contract within France’s digital ID system. They agreed that the UK needs and is developing a healthy ecosystem of digital identity providers, that the ‘consumer’ should be at the heart of the system, and that the private sector is an inherently necessary part of the future digital ID landscape.

The government’s role, Lord Clement-Jones believes, is to establish trust by setting the standards private firms must adhere to, and it “should be opening up government services to third party digital ID”. He is opposed to the notion of a government-run digital ID system, based on the outcome of the UK’s ineffective Verify scheme.

Lord Clement-Jones considers the current flow of evidence-gathering and consultations in the UK to be a “slow waltz”, particularly in light of the recent EU proposals for a digital wallet which is “exactly what is needed” as it is “leaving it to the digital marketplace”. He believes the lack of a “solid proposal” so far by the UK government is hampering the establishment of trust.

“The real thing we have to avoid is for social media to be the arbiters of digital ID. This is why we have to move fast,” said Clement-Jones. “I do not want to be defined by Google or Facebook or Instagram in terms of who I am. Let alone TikTok.” This, he noted, is why the UK needs commercial digital ID providers.

Yoti’s Julie Dawson believes the EU proposals, with their provision for spanning the public and private sectors, could even see the bloc leapfrogging other jurisdictions. She also found highly encouraging the inclusion of ‘vouching’ in the UK system, whereby somebody without formal identity documents can turn to a known registered professional to vouch for them and allow them to register some form of digital ID. This could make the UK system more inclusive.

Data minimization should be a key part of the UK plan, where only the necessary attribute of somebody’s ID is checked, such as whether they are over 18, compared to handing over a passport or sending a scan which contains multiple other attributes which are not necessary for the seller to see. GBG’s Laura Barrowcliff said this is a highly significant benefit of digital ID and one which, if communicated to the public, could increase support for and trust in digital ID. Any reduction in fraud associated with the use of digital ID could also help sway public opinion, though multiple panellists noted that there will always be elements of identity fraud.
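The over-18 check described here is essentially selective disclosure: the relying party learns a single yes/no answer rather than the whole document. A minimal sketch, assuming a hypothetical identity provider that holds the full credential (the names and fields below are illustrative, not any real scheme’s API):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class IdentityRecord:
    # Full credential held by the (hypothetical) identity provider
    name: str
    date_of_birth: date
    passport_number: str

def assert_over_18(record: IdentityRecord, today: date) -> bool:
    """Answer only the yes/no question; no other attribute leaves the provider."""
    had_birthday = (today.month, today.day) >= (record.date_of_birth.month,
                                                record.date_of_birth.day)
    age = today.year - record.date_of_birth.year - (0 if had_birthday else 1)
    return age >= 18

# The relying party (e.g. an online seller) receives only the boolean,
# never the name, date of birth or passport number.
record = IdentityRecord("Jo Bloggs", date(2000, 6, 1), "123456789")
print(assert_over_18(record, date(2021, 8, 20)))  # True
```

The design point is that the seller's question ("over 18?") is answered without the seller ever seeing the attributes it has no need for, which is the data-minimization benefit the panel highlighted.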

Yoti’s Dawson raised a concern that the current 18-month wait until any legislation comes from the framework and consultations could become lost time for developers, and hopes they will continue to enhance their offerings. She also called for further transparency in the discussions happening in government departments.

Lord Clement-Jones hopes for the formation of data foundations to manage publicly-held information so the public knows where data is held and how. GBG’s Laura Barrowcliff simply called for simplicity in the ongoing development of the digital ID landscape to keep consumers at the heart so that they can understand the changes and potential and buy into the scheme as their trust grows.

 


Digital ID: What’s the current state-of-play in the UK?

Here is a summary report by techUK of a really useful session discussing the Government's plans for digital ID in the UK.

On 22 July, as part of the #DigitalID2021 event series, techUK hosted an insightful discussion exploring the current state-of-play for digital identity in the UK and how to build public trust in digital identity technologies. The panel also examined how the UK’s progress on digital ID compares with international counterparts and set out their top priorities to support the digital identity market and facilitate wider adoption.

The panel included:

  • Lord Tim Clement-Jones, House of Lords Spokesperson for Digital for the Liberal Democrats
  • Julie Dawson, Director of Regulatory and Policy at Yoti
  • Laura Barrowcliff, Head of Strategy at GBG
  • Margaret Moore, Director of Citizen & Devolved Government Services at Sopra Steria (chair)

You can watch the full webinar here or read our summary of the key insights below:


The UK’s progress on digital identity

Opening the session, the panel discussed progress around digital identity since the start of the pandemic.

Julie Dawson raised a number of developments that indicate steps in the right direction. Before the pandemic over 3.5m EU citizens proved their settled status via the EU Settlement Scheme, whilst the JMLSG and Land Registry have both since explicitly recognised digital identity, with digital right to work checks and a Home Office sandbox on age verification technologies in alcohol sales also introduced since March last year. She also lauded the creation of the Digital Regulation Cooperation Forum as a great example of joining up across government departments, such as on the topic of age assurance.

Lord Tim Clement-Jones, on the other hand, noted that the pace of change has remained slow. He said that the UK government needs to take concrete action and should focus on opening up government data to third party providers. He also made the point that the U-turn on Part 3 of the Digital Economy Act has not yet been rectified, and so the manifesto pledge to protect children online has still to be fulfilled. Julie pointed out that legislative change to the Mandatory Licensing Conditions is still needed, to enable a person to prove their age to purchase alcohol without solely relying on a physical document with a physical hologram.

Collaboration across industry around digital identities was also highlighted by Julie, drawing upon the example of the Good Health Pass Collaborative, which has emerged since the start of the pandemic. The Collaborative has brought together a variety of stakeholders and over 130 companies to work on an interoperable digital identity solution that will allow international travel to operate at scale once more post-COVID.

 

Examining the Government alpha Trust Framework and latest consultation

Moving on to look at the government’s alpha Trust Framework for digital identity, as well as the newly published consultation on digital identity and attributes, the panel explored what these documents do well and what gaps ultimately remain.

Julie Dawson and Laura Barrowcliff both saw a lot of good in the new proposals, with Laura highlighting how the priorities in the government’s approach around governance, inclusion and interoperability broadly hit on the right points. Julie also highlighted the role for vouching in the government’s framework as a positive step and emphasised the government’s recognition of the importance of parity for digital identity verification as one of the most central developments for wider adoption of the technology.

Providing a more cautious view, Lord Tim Clement-Jones said the UK risked creating a byzantine pyramid of governance on digital identity. He pointed to the huge number of bodies envisaged to have roles in the UK system and raised concerns that the UK will end up with a certification scheme that differs from anyone else’s internationally by not using existing standards or accreditation systems.

Looking forward, Julie highlighted that providers are looking for clarity on how to operate and deliver over the next 18 months before any of these documents become legislation. She also expressed the sincere hope that the progress made in terms of offering digital Right to Work checks, alongside physical ones, will continue rather than end in September 2021.

She identified two separate ‘tracks’ for public and private sector use of digital identity and raised the need for a conversation on when and how to join these up with the consumer at the heart. When considering data sources, for example, the ability of digital identity providers to access data across the Passport Office, the DVLA and other government agencies and departments is critical to support the development of digital identity solutions.

The panel was pleased to see the creation of a new government Digital Identity Strategy Board, which they hoped would drive progress, but raised the need for further transparency about ongoing work in this space, including a list of members, terms of reference and meeting minutes from these sessions.

 

Public trust in digital identity

One of the core topics of conversation centred upon trust in digital identity technologies and what steps can be taken to drive wider public trust in this space.

Lord Tim Clement-Jones said that there is a key role for government on standards to ensure digital identity providers are suitable and trustworthy, as well as in providing a workable and feasible proposal that inspires public confidence.

Julie highlighted how, alongside the Post Office, Yoti welcomed the soon-to-be-published research undertaken by OIX into documents and inclusion.

Laura Barrowcliff emphasised the importance of context for public trust, putting the consumer experience at the heart of considerations. Opening up digital identity and consumer choice is one such way of improving the experience for users. Whilst much of the discussion on trust ties in with concerns around fraud, Laura highlighted how digital identity can actually help from a security and privacy perspective by embodying principles such as data minimisation and transparency. She also highlighted how data minimisation and proportionate use of digital identity data could be key for user buy-in.

 

Lessons from around the world

Looking to international counterparts, the panel drew attention to countries around the world which have made good progress on digital identity and key learnings from these global exemplars.

The progress on digital identity made in Singapore and Canada was mentioned by Julie Dawson, who emphasised the openness around digital identity proposals – which span the public and private sector – and the work being done to keep citizens informed and involve them in the process.

Julie also raised the example of the EU, which is accelerating its work on digital identity with an approach that also spans the public and private sector and is looking at key issues such as data sources whilst focusing on the consumer. Lord Tim Clement-Jones emphasised the importance of monitoring Europe’s progress in this area and the need for the UK government to consider how its own approach will be interoperable internationally.

Panellists discussed the role digital identities have played in Estonia where 99% of citizens hold digital ID and public trust in digital identities is the norm. However, they recognised key differences between the UK and Estonia. In the UK, digital identity solutions are developing in the context of widespread use of physical identification documents, whereas digital identities were the starting point in Estonia.

Beyond the EU, Laura said that GBG has a digital identity solution in Australia where the market for reusable identities is accelerating rapidly. She highlighted that working with private sector companies who have the necessary infrastructure and capabilities in place is critical to drive adoption.

 

Priorities for digital identity

Drawing the discussion to a close, each of the panellists were asked for their top priority to support public trust and the growth of the digital identity market in the UK.

Transparency was identified as Julie Dawson’s top priority, particularly around what discussions are happening within and across government departments and on the work of the Strategy Board.

Lord Tim Clement-Jones highlighted data and trustworthy data-sharing as key. He said he hopes to see the formation of data foundations and trusts for publicly held information that is properly curated and used or shared on the basis of set standards and rules, which should spill over into the digital identity arena.

Laura Barrowcliff said simplicity is most important, keeping things simple for those working in the ecosystem as well as for consumers, with those consumers at the heart of all decision-making processes.


In the new era of geopolitical competition and economic rivalry, what strategies should China and the UK adopt to forge a more constructive relationship?

 

From Kalavinka Viewpoint #8


The prevailing mood in Europe now is to view China through a security and human rights lens rather than the trade and investment approach of the past 20 years. This has been heavily influenced by the policy of successive US administrations. People make a big mistake in thinking that American geopolitical policy always changes with a new administration. Not having Trump tweeting at 6am is a relief, but Joe Biden is going to be as hardline over security issues and relations with China as his predecessor. For better or worse, the UK, and to a lesser extent the EU, having diverged from the US for a decade, has now decided, driven by the prospect of more limited access to intelligence ties, to align itself more closely with US policy towards China. The UK’s recent National Security and Investment Act, which identifies 17 sensitive sectors, including AI and quantum computing technologies, where government can block investment transactions, is a close imitation of CFIUS. So for UK corporate investors in particular there is a new tension between investment and national security. With the new legislation and dynamics around trade, businesses will have to be politically advertent. They will have to consider whether the sector in which they seek investment, or in which they plan to invest in partnership with overseas investors, is potentially sensitive.

Globally, repatriation of supply chains will become an issue. These things ebb and flow: over the 20th century they expanded, shrank and expanded again. But especially as a result of Brexit, the pandemic and people’s understanding of how the vaccinations were manufactured, and as a result of our new, much poorer relationship with China, repatriation is going to be an imperative. Going forward, the best way of engaging with China and Chinese investment will be to avoid sourcing from sensitive provinces, to steer clear of arrangements that could give rise to the sort of national infrastructure security concerns that Huawei did, and to engage positively over the essential global areas for cooperation such as the UN Sustainable Development Goals and climate change. If we don’t, we won’t see net zero by 2050. China isn’t going to disappear as an important economic powerhouse and trading and investment partner, but we need to pick and choose where we trade and cooperate. And in this climate that will require good navigation skills.


Britain should be leading the global conversation on tech

 

It's been clear during the pandemic that we're increasingly dependent on digital technology and online solutions. The Culture Secretary recently set out 10 tech priorities, some of which were reflected in the Queen's Speech. But how do they measure up, and are they the right ones?

First, we need to roll out world-class digital infrastructure nationwide and level up digital prosperity across the UK.

We were originally promised spending of £5bn by 2025, yet only a fraction of this - £1.2 billion - will have been spent by then. Digital exclusion and data poverty have become acute during the pandemic: it is estimated that some 1.8 million children have not had adequate digital access. It's not just about broadband being available; it's about affordability too, and about devices being available.

Unlocking the power of data is another priority, as well as championing free and fair digital trade.

We recently had the government’s response to the consultation on the National Data Strategy. There is some understanding of the need to maintain public trust in the sharing and use of people's data, and a welcome commitment to continue the work started by the Open Data Institute in creating trustworthy mechanisms, such as data institutions and trusts, to do so. But recent events involving GP-held data demonstrate that we must also ensure public data is valued and used for public benefit and not simply traded away. We should establish a Sovereign Health Data Fund, as suggested by Future Care Capital.

"The pace, scale and ambition of government action does not match the upskilling challenge facing many people working in the UK"

We must keep the UK safe and secure online. We need the “secure by design” consumer protection provisions now promised. But the draft Online Safety Bill now published is not yet fit for purpose. The problem is what's excluded: in particular, commercial pornography where there is no user-generated content; societal harms caused, for instance, by fake news and disinformation, so clearly described in the report of Lord Puttnam’s Democracy and Digital Technologies Select Committee; and all educational and news platforms.

Additionally, no group actions can be brought. There's no focus on the issues surrounding anonymity and "know your user", nor any reference to economic harms. Most tellingly, there is no focus on enhanced PSHE or the promised media literacy strategy - both of which must go hand-in-hand with this legislation. There's also little clarity on the issue of algorithmic pushing of content.

It’s vital that we build a tech-savvy nation. This is partly about digital skills for the future and I welcome greater focus on further education in the new Skills and Post-16 Education Bill. But the pace, scale and ambition of government action does not match the upskilling challenge facing many people working in the UK, as Jo Johnson recently said.

The need for a funding system that helps people to reskill is critical. Non-STEM creative courses should be valued. Careers advice and adult education need a total revamp. Apprenticeship levy reform is overdue. The work of Local Digital Skills Partnerships is welcome, but they are massively under-resourced. Broader digital literacy is crucial too, as the AI Council pointed out in their AI Roadmap, as is greater diversity and inclusion in the tech workforce.

We must fuel a new era of start-ups and scaleups and unleash the transformational power of tech and AI.

The government needs to honour their pledge to the Lords' Science and Technology Committee to support catapults to be more effective institutions as a critical part of innovation strategy. I welcome the commitment to produce a National AI Strategy, which we should all contribute to when the consultation takes place later this year.

We should be leading the global conversation on tech, building on the recent G7 Digital Communique and plans to host the Future Tech Forum, but we need to go beyond principles in establishing international AI governance standards and solutions. G7 agreement on a global minimum corporation tax rate bodes well for the OECD digital tax discussions.

At the end of the day there are numerous notable omissions. Where is the commitment to a Bill to set up the new Digital Markets Unit, or to tackling the gig economy in the many services run through digital applications? The latter should be a major priority.

 


Lord C-J in the Corporate Financier on the National Security Investment Act and the new investment landscape

“People make a big mistake thinking macro American policy changes with the political parties. Not having Trump tweeting at 6am is wonderful, but Joe Biden is going to be just as pragmatic over American trade policy and is not going to suddenly rush into a deal with us.

“CFIUS has been around for an awfully long time. In a sense, you could say that our National Security and Investment Act is an imitation of CFIUS. America isn’t going to suddenly cave in on demands for agricultural imports, for instance, but again, it may be that the climate-change agenda will be more important to them.

“Then there’s the whole area of regulations. There is a rather different American culture to regulation, but now there’s more of an appetite for it around climate and the environment than there was under Trump.”

Here is the full article

Corporate Financier June 2021


These are the ingredients for good higher education governance

I recently took part in a session on "Higher education governance - the challenges that lie ahead and what can we do about it" at an AdvanceHE Clerks and Secretaries Network Event and shared my view on university governance. This is the blog I wrote afterwards, which reflects what I said.

https://www.advance-he.ac.uk/news-and-views/ingredients-good-higher-education-governance

Chairing a Higher Education institution is a continual learning process and it was useful to reflect on governance in the run up to Advance HE’s recent discussion session with myself and Jane Hamilton, Chair of Council of the University of Essex.

Governance needs to be fit for purpose in terms of setting and adhering to a strategy for sustainable growth with a clear set of key strategic objectives and doing it by reference to a set of core values. And I entirely agree with Jane that behaviour and culture which reflect those values are as important as governance processes.

 

But the context is much more difficult than when I chaired the School of Pharmacy from 2008, when HEFCE was the regulator, or even when I chaired UCL’s audit committee from 2012. The OfS is a different animal altogether and, despite the assurance of autonomy in the Higher Education Act, it feels a more highly regulated and more prescribed environment than ever.

I was a Company Secretary of a FTSE 100 company for many years so I have some standard of comparison with the corporate sector! Current university governance, I believe, in addition to the strategic aspect, has two crucial overarching challenges.

First, particularly in the face of what some have described as the culture war, there is the crucial importance of making, and being able to demonstrate, public contribution through – for example – showing that:

  • We have widened access
  • We are a crucial component of social mobility, diversity and inclusion and enabling life chances
  • We provide value for money
  • We provide not just an excellent student experience but social capital and a pathway to employment as well
  • In relation to FE, we are complementary and not just the privileged sibling
  • We are making a contribution to post-COVID recovery in many different ways, and contributed to the ‘COVID effort’ through our expertise and voluntary activity in particular
  • We make a strong community contribution especially with our local schools
  • Our partnerships in research and research output make a significant difference.

All this of course needs to be much broader than simply the metrics in the Research Excellence and Knowledge Exchange Frameworks or the National Student Survey.

The second important challenge is managing risk in respect of the many issues that are thrown at us, for example:

  • Funding: post-pandemic funding; subject mix issues, arts funding in particular; the impact of a drop in overseas student recruitment; National Security and Investment Act requirements reducing partnership opportunities; loss of London weighting; and possible fee reductions following the Augar Report recommendations
  • The implications of action on climate change
  • USS pension issues
  • Student welfare issues such as mental health and digital exclusion
  • Issues related to the Prevent programme
  • Ethical Investment in general, Fossil Fuels in particular
  • And, of course, freedom of speech issues brought to the fore by the recent Queen’s Speech.

This is not exhaustive as colleagues involved in higher education will testify! There is correspondingly a new emphasis on enhanced communication in both areas given what is at stake.

In a heavily regulated sector there is clearly a formal requirement for good governance in our institutions and processes and I think it’s true to say, without being complacent, that Covid lockdowns have tested these and shown that they are largely fit for purpose and able to respond in an agile way. We ourselves at Queen Mary, when going virtual, instituted a greater frequency of meetings and regular financial gateways to ensure the Council was fully on top of the changing risks. We will all, I know, want to take some of the innovations forward in new hybrid processes where they can be shown to contribute to engagement and inclusion.

But Covid has also demonstrated how important informal links are in terms of understanding perspectives and sharing ideas. Relationships are crucial and can’t be built and developed in formal meetings alone. This is particularly the case with student relations. Informal presentations by sabbaticals can reap great rewards in terms of insight and communication. More generally, it is clear that informal preparatory briefings for members can be of great benefit before key decisions are made in a formal meeting.

External members have a strong part to play through the expertise and perceptions they bring, in the student employability agenda and in the relationships they build within the academic community; harnessing these in constructive engagement is an essential part of informal governance.

So going forward, what is and what should be the state of university governance? There will clearly be a need for continued agility, and there will be no let-up in the need to change and adapt to new challenges. KPIs are an important governance discipline, but we will need to review their relevance at regular intervals. We will need to engage with an ever wider group of stakeholders: local, national and global. All of our ‘civic university’ credentials may need refreshing.

The culture will continue to be set by VCs to a large extent, but a frank and open “no surprises” approach can be promoted as part of the institution’s culture. VCs have become much more accountable than in the past. Fixed terms and 360 appraisals are increasingly the norm.

The student role in co-creation of courses and the educational experience is ever more crucial. The quality of that experience is core to the mission of HE institutions, so developing a creative approach to the rather anomalous separate responsibilities of senate and council is needed.

Diversity on the Council in every sense is fundamental so that there are different perspectives and constructive challenge to the leadership. 1-2-1s with all council members on a regular basis to gain feedback and talk about their contribution and aspirations are important. At Council meetings we need to hear from not just the VC, but the whole senior executive team and heads of school: distributed leadership is crucial.

Given these challenges, how do we attract the best council members? Should we pay external members? Committee chairs perhaps could receive attendance allowance type payments. But I would prefer it if members can be recruited who continue to want to serve out of a sense of mission.

This will very much depend on how the mission and values are shared and communicated. So we come back to strategic focus, and the central role of governance in delivering it!


Lord C-J: Government must resolve AI ethical issues in the Integrated Review

The opportunities and risks involved with the development of AI and other digital technologies and use of data loom large in the 4 key areas of the Strategic Framework of the Integrated Review.

The House Live April 2021

https://www.politicshome.com/thehouse/article/government-must-resolve-ai-ethical-issues-in-the-integrated-review

 

 

The Lords recently debated the government’s Integrated Review set out in “Global Britain in a Competitive Age”. The opportunities and risks involved with the development of AI and other digital technologies and use of data loom large in the 4 key areas of the Strategic Framework of the Review. So, I hope that the promised AI Strategy this autumn and a Defence AI Strategy this May will flesh these out, resolve some of the contradictions and tackle a number of key issues. Let me mark the government’s card in the meantime.

Commercialisation of our R&D in the UK is key but can be a real weakness. The government needs to honour its pledge to the Science and Technology Committee to support Catapults to be more effective institutions as a critical part of innovation strategy. Access to finance is also crucial. The Kalifa Review of UK Fintech recommends the delivery of a digital finance package that creates a new regulatory framework for emerging technology. What is the government’s response to these creative ideas?

"The pandemic has highlighted the need for public trust in data use"

Regarding skills, the nature of work will change radically, and there will be a need for different jobs and skills. A great deal is happening on high-end technical specialist skills: Turing Fellowships, PhDs, conversion courses, an Office for Talent, a Global Talent Visa and so on. As the AI Council Roadmap points out, the government needs to take steps to ensure that the general digital skills and digital literacy of the UK are brought up to speed. A specific training scheme should be designed to support people to work alongside AI and automation, and to be able to maximise its potential.

Building national resilience by adopting a whole-of-society approach to risk assessment is welcome, but in this context the government should heed the recent Alan Turing Institute report, which emphasizes that access to reliable information, particularly online, is crucial to the ability of a democracy to coordinate effective collective action. New AI applications such as GPT-3, the language generation system, can readily spread and amplify disinformation. How will the Online Safety legislation tackle this?

At the heart of building resilience must lie a comprehensive cyber strategy, but the threat in the digital world is far wider than cyber. Hazards and threats can become more likely because of the development of technologies like AI, the transformations it will bring, and the way technologies interconnect to amplify them.

A core of our resilience is of course defence capability. A new Defence Centre for Artificial Intelligence is now being formed to accelerate adoption, and a Defence AI Strategy is promised next month. Its importance is reinforced in the Defence Command Paper, but there is a wholly inadequate approach to the control of lethal autonomous weapon systems, or LAWS. Whilst there is a NATO definition of “automated” and “autonomous”, the MOD has no operative definition of LAWS. That the most problematic aspect – autonomy – has not been defined is an extraordinary state of affairs, given that the UK is a founding member of the AI Partnership for Defence, created to “provide values-based global leadership in defence for policies and approaches in adopting AI.”

The Review talks of supporting the effective and ethical adoption of AI and data technologies and identifying international opportunities to collaborate on AI R&D ethics and regulation. At the same time, it talks of the limits of global governance with “competition over the development of rules, norms and standards.” How do the two statements square? We have seen the recent publication of the EU’s proposed Legal Framework for the risk-based regulation of AI. Will the government follow suit?

Regarding data, the government says it wants to see a continuing focus on interoperability and to champion the international flow of data and is setting up a new Central Digital and Data Office. But the pandemic has highlighted the need for public trust in data use. Will the National Data Strategy (NDS) recognize this and take on board the AI Council’s recommendations to build public trust for use of public data, through competition in data access, and responsible and trustworthy data governance frameworks?

 

Lord Clement-Jones is a Liberal Democrat member of the House of Lords, former Chair of the Lords Select Committee on AI and co-chair of the APPG on AI.


Shirley Williams 1930-2021

So sad to lose Shirley Williams, such a warm, principled and inspiring political pioneer. The conscience of the Liberal Democrats


The Digital Regulation Cooperation Forum: Priorities for UK Digital Regulation

https://www.aldes.org.uk/the-drcf-priorities-for-uk-digital-regulation/

Last July the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO) and the Office of Communications (Ofcom) formed the Digital Regulation Cooperation Forum (DRCF) which last month outlined its priorities in a Workplan for 2021/22.

The creation of the DRCF is a significant move by these regulators in the coordination of regulation across digital and online services and, as they say, recognises the unique challenges posed by regulation of online platforms which are playing an increasingly important role in our lives.

The Workplan for 2021/22 is designed to set out what they fashionably call a roadmap for how Ofcom, the CMA, ICO and, from this month as a full member of the DRCF, the FCA, will increase the scope and scale of this coordination through “pooling expertise and resources, working more closely together on online regulatory matters of mutual importance, and reporting on results annually.”

In launching the Workplan, the DRCF invited comments and observations on its content. The challenge has been taken up by a number of Liberal Democrat colleagues in the Lords, coordinated by myself as Digital Spokesperson.

We wholly welcome the creation of the DRCF and of a cross-regulator Workplan. Collaboration between regulators is of great importance as the technological, digital and online issues they face increasingly converge, especially in the light of the growing emergence of digital issues across the regulatory field, such as data use and access, the consumer impact of algorithms and the advent of online harms legislation.

We suggested a number of additions to supplement what is essentially an excellent core plan with some other areas of activity which should be undertaken as part of the plan, resources permitting.

The context of the regulators’ work is provided by the overall aim of the Government to ensure “an inclusive, competitive and innovative digital economy”. To this we want to see the addition of the words “ethical, safe and trustworthy”. This is not only crucial in terms of public trust but very much in line with one of the announced three key pillars of the Government’s AI Strategy, to be published later this year, and with other aspects of the government’s National Data Strategy.

The key areas of activity set out in the Workplan are:

  • Responding strategically to industry and technological developments;
  • Developing joined up regulatory approaches;
  • Building shared skills and capabilities;
  • Building clarity through collective engagement;
  • Developing the DRCF.

Responding strategically to industry and technological developments

A number of emerging trends and technological developments are referred to in the plan. These include broad areas such as design frameworks, algorithmic processing (and we must assume this includes algorithmic decision making), digital advertising technologies and end-to-end encryption.

There are significant implications from other emerging issues, however, which we also think require particular attention from the regulators, such as deepfakes, live facial recognition, the collection, use and processing of behavioural data beyond advertising, and devices (such as smart meters) which collect and share data and are part of the Internet of Things.

Developing joined up regulatory approaches

Here the Workplan suggests concentrating on Data Protection and Competition Regulation, the Age Appropriate Design Code (aka Children’s Code) introduced by the Data Protection Act, the regulation of video-sharing platforms and online safety and interactions in the wider digital regulation landscape.

We believe, however, especially in the light of considerable debate about what are appropriate remedies for market dominance in the context of the digital economy and the growing power of Big Tech, that there should be cross-regulator consideration of how proportionate but timely interim and ex ante interventions, and structural versus behavioural remedies, have application.

Given the importance of simultaneously encouraging innovation and trustworthy use and application of data and AI, we also thought there should be a focus on generic sandboxing approaches to innovative regulated services.

Building shared skills and capabilities

To some extent the Workplan in this area proposes a journey of exploration. It suggests building shared skills through, for instance, secondment programmes, collocated teams, building a shared centre of excellence and co-recruitment initiatives, but without specifying what those shared skills might be.

Last December, I chaired a House of Lords follow-up inquiry to the AI Select Committee’s report AI in the UK: Ready, Willing and Able?, and our report, AI in the UK: No Room for Complacency, concluded and recommended as follows:

“60. The challenges posed by the development and deployment of AI cannot currently be tackled by cross-cutting regulation. The understanding by users and policymakers needs to be developed through a better understanding of risk and how it can be assessed and mitigated…

61. The ICO must develop a training course for use by regulators to ensure that their staff have a grounding in the ethical and appropriate use of public data and AI systems, and its opportunities and risks. It will be essential for sector specific regulators to be in a position to evaluate those risks, to assess ethical compliance, and to advise their sectors accordingly…”

In addition to these proposals on upskilling in risk assessment and mitigation and the ethical and appropriate use of public data, we have now suggested that, as part of the Workplan, a wider set of skills could be built across the regulators, particularly in the AI space: for example, assessment of the use of behavioural data in advertising, assessment of the ethical compliance of AI systems, algorithm inspection, AI audit, and the monitoring and evaluation of digital ID and age verification solutions.

As regards technical expertise available to the DRCF, we took the view that it is important that the Alan Turing Institute, the Centre for Data Ethics and Innovation and the Intellectual Property Office are closely involved with the work programme, and that the DRCF makes full use of their skills and knowledge.

Given the depth and technical nature of many of the skills required, the proposal for a centre of excellence to provide common expertise in digital and technological issues, which could draw upon and be of service to all relevant regulators, is vital and should be a resource priority.

As regards the last two priority activities, Building Clarity Through Collective Engagement and Developing the DRCF, we are strong supporters of the intention to draw upon international expertise. The EU, the Council of Europe and the OECD are all building valuable expertise in AI risk assessment by reference to ethical principles and human rights.

We also welcomed the regulators’ intention to update and review the current MOUs between the regulators to ensure greater transparency and cooperation.

Other, non-statutory regulators, however, have a strong interest in the objectives and outcomes of the DRCF too. There is welcome reference in the Workplan to the Advertising Standards Authority’s involvement with the DRCF, but as regards other relevant regulators such as the Press Representation Council and the BBFC, we want to see them included in the work of the DRCF given their involvement in digital platforms.

In addition, there is the question of whether other statutory regulators with an interest in digital and data issues, such as OFWAT or OFGEM with smart meters, or the CAA with drone technology, will be included in DRCF discussions and training.

We even think there is a good case for the DRCF to share and spread its knowledge and expertise, as it develops, to the wider ADR and ombudsman ecosystem, and even to the wider legal system, in the light of the new Master of the Rolls, Sir Geoffrey Vos’s, recent espousal of AI in the justice system.

It may not yet have a high public profile, but how, and how quickly, the work of the DRCF unfolds matters to us all.