UK approach to AI regulation

21.06.29 | Jason Smith

In March 2021, the UK’s Digital Secretary, Oliver Dowden, announced the UK’s forthcoming National Artificial Intelligence (AI) Strategy. The Strategy, which is due to be published later this year, will seek to establish the UK as a global centre for the development, commercialisation and adoption of responsible AI. As part of the UK’s focus on its 10 tech priorities, the new AI Strategy will focus on:

  • growth of the economy through widespread use of AI technologies;
  • ethical, safe and trustworthy development of responsible AI; and
  • resilience in the face of change through an emphasis on skills, talent and research & development (R&D).

Read on for an interview with AI Sustainability Center senior advisor Jason Smith about the future of AI regulation in the UK.

What contributed to the development of the UK AI Strategy? What’s the background?
The UK’s AI Strategy stems from work originally done in the House of Lords. In April 2018, the House of Lords published its first report on AI in the UK, entitled ‘Ready, Willing & Able’. In December 2020, it published a follow-up report, which called on the Government to create a comprehensive AI Strategy. One of the recommendations was that the Centre for Data Ethics and Innovation (CDEI) create and publish national standards for the ethical development and deployment of AI. It will be interesting to see what role, if any, the CDEI will play in the new AI Strategy.

The announcement of the AI Strategy also takes up a recommendation by the UK AI Council (an expert committee advising the UK government on the AI ecosystem), which called for the development of such a strategy in its AI Roadmap of January 2021.

The aims of the AI Roadmap are twofold. First, it calls for continued funding of the field, stating that it is necessary to “double-down” on the recent investments the UK has made in AI. Second, it advocates that support for AI should reflect the rapidity with which the underlying science and technology are developing, so that policy remains adaptable to disruption. The approach seeks to ensure that the UK is at the forefront of integrating approaches to ethics, security and social impacts into the development of AI in the coming decades. This is seen as a necessary step to foster “full confidence in AI across society.”

What is most likely to be the UK approach?
It’s fair to say it is now generally accepted that AI needs to be regulated: AI systems can make decisions that affect human lives, and there needs to be confidence that those decisions are made safely, ethically and free from bias and discrimination. The debate now centres on what the regulation of AI should look like and how it should work. Should it be restrictive, even prohibiting certain uses of AI from the outset with the aim of protecting consumers? Or should it be light touch, enabling innovation while giving consumers as much information as possible so they can understand how the AI works and what data it uses, and then potentially object to a decision the AI makes?

The EU has clearly gone for the first option in its draft AI Regulation. It defines what ‘high-risk’ AI is and sets out a system for registering stand-alone high-risk AI applications in a public EU-wide database. Providers of high-risk AI systems must offer ‘meaningful information’ about those systems and carry out conformity assessments.

It isn’t yet apparent which approach the UK will follow. The protection of consumers and the desire to prevent or correct bias in an AI system will undoubtedly be important objectives for any government or regulator seeking to limit the potential harms of AI. The UK may well follow the approach set out in the House of Lords’ December 2020 report, under which different regulators would each address the issues specific to their sector, coordinating with one another, rather than adopting the EU’s cross-cutting approach.

Interestingly, the US seems to be taking a different approach again. In its April 2021 blog post, the FTC called on those building AI systems to build in ‘truth, fairness and equity’ from the start – almost an ‘ethical by design’ approach, without prohibiting particular systems.

What does all this mean?
Whichever approach is taken, all are likely to have one thing in common – a requirement for providers of AI systems to be transparent about how the AI works, how it makes decisions, and on what basis.

Organisations can prepare by adopting the following measures:

  • ensuring Boards and senior management are fully briefed on the use of AI and the data it uses;
  • thinking through how they publicly explain the AI used in their business, products and services; and
  • ensuring their risk profile, systems and governance address the risks that AI brings, including what they will do if their AI falls within the EU’s ‘high-risk’ categorisation.

The conclusion from all of this is that, although regulatory frameworks are yet to be finalised (in the EU) or even formally defined (outside it), there can be no doubt that regulation of the use and design of AI is heading our way.