At last, UK Government publishes its White Paper on AI – “A pro-innovation approach to AI regulation” – an opportune start, but as expected, a framework with detail to follow…

The Department for Science, Innovation and Technology has finally published its AI regulation white paper (the ‘White Paper’). Here are the key elements:

  • As is apparent from its title, the primary theme is that the framework must not stifle innovation (and must drive growth and prosperity). It must also build consumer trust in the use of AI and “strengthen the UK’s position as a global leader in AI”.
  • The White Paper proposes a principles-based framework (the ‘Framework’) to be applied by regulators across all sectors.
  • It appears that the principles will be translated into concrete guidance issued by regulators (eg the ICO, FCA and OFCOM) within 6 to 12 months. The legal basis for the guidance will be derived from existing regulatory and legal principles.
  • The Framework is non-statutory, but the Government may legislate after the initial 12 months if necessary, sparingly and parliamentary time permitting. The Government does anticipate that it will need to introduce a statutory duty for regulators to have regard to the principles.
  • The White Paper relies on the regulators’ ability to tailor and apply the Framework proportionately and sympathetically to different sectoral requirements, but acknowledges the need for (and initiates) significant central government effort to coordinate regulators and ensure consistency of approach between them.
  • Given the extraordinary pace of change in the sector, the non-statutory, flexible approach is intended to be built up through guidance that can be iterated and amended more easily; this pragmatism bypasses the legislative process. It also potentially buys an R&D advantage: facilitative guidance should, as a minimum, emerge faster, while jurisdictions such as the EU debate the finer ethical boundaries to be included in legislation (which the UK can conform to later if desirable).

What AI will it apply to?

  • The Framework defines AI by reference to two characteristics that generate the need for a bespoke regulatory response (this narrows its application considerably):
    • Adaptivity – training allows an AI system to infer patterns that are often not easily discernible to humans or envisioned by the AI’s programmers.
    • Autonomy – some AI systems can make decisions without the express intent or ongoing control of a human.
  • On top of this, the intention appears to be to regulate proportionately depending on the risks generated, although the White Paper makes no attempt to pre-define high-risk uses.

What are the principles set out in the White Paper?

  • 1. Safety, security and robustness
    • The Framework requires regulators to consider and incorporate into their guidance various technical standards and practices, such as the UK National Cyber Security Centre Principles for the security of machine learning.
  • 2. Appropriate transparency and explainability
    • The Framework defines transparency as the provision of information about when and how an AI system is used, and explainability as the ability to understand and interpret its decision-making process, again referencing the use of international technical standards such as IEEE 7001, ISO TS 6254 and ISO 12792.
  • 3. Fairness
    • The Framework does not seek to extend this concept beyond legally required fairness (ie it does not stray into pure ethics or wider societal impacts). It anticipates that regulators may need to develop and publish descriptions and illustrations of fairness that apply within their regulatory remit, together with references to international technical standards.
  • 4. Accountability and governance
    • The Framework sets out the need to ensure and demonstrate effective and appropriate oversight, again featuring references to international technical standards.
  • 5. Contestability and redress
    • The Framework requires regulators to guide regulated entities to provide clear routes for affected parties to contest harmful AI outcomes as needed (without actually creating any new rights to contest or obtain redress at this stage).

How will regulators apply these principles?

  • Regulators must provide guidance as to how the principles interact with (and can be underpinned by) existing legislation and illustrate what compliance looks like.
  • They must produce joint guidance with other regulators where appropriate.

What will central government do?

  • The White Paper explains that the Government will give the regulators guidance on how to apply the principles, including in relation to the risks posed in particular contexts, the measures that should be applied to mitigate those risks, the development of joint guidance, proactive collaboration with central government, and the monitoring and evaluation of the effectiveness of the framework.
  • Via a “central suite of functions”, the Government will support the roll-out of the framework through:
    • Central monitoring and evaluation of the effectiveness of the framework;
    • Central regulatory guidance to support the implementation of the principles (to resolve unjustifiable divergences and identify any regulatory gaps);
    • Central known and emerging risk assessment and horizon scanning;
    • Supporting innovators to navigate regulatory complexity (including through sandboxes – possibly multi-regulator and multi-sector in due course, but starting with a single-sector, multi-regulator pilot);
    • Providing guidance to businesses and consumers to navigate the regulatory landscape;
    • Ensuring interoperability with international regulatory frameworks; and
    • Considering whether existing cooperative fora between UK regulators (eg the Digital Regulatory Cooperation Forum) are sufficient or whether new ones are required.
  • The Government will not change the UK civil liability scheme for AI at this point, but it will monitor it.
  • It will closely monitor foundation models (such as LLMs) but it does not propose any specific interventions at this point.
  • It will launch a portfolio of AI assurance techniques in Spring 2023. It envisages third-party assurance to commonly understood standards as being critical to supporting the framework. It has already established the UK AI Standards Hub to champion the use of technical standards. It envisages layers of technical standards: (1) sector-agnostic standards (eg risk management systems); (2) issue-specific standards (eg transparency or bias); and (3) sector-specific standards, promoted by the regulators in different contexts.

Timing

  • The White Paper is open for consultation until 21 June 2023.
  • In the next 6 months, the following implementation steps are anticipated:
    • Consultation engagement and government response;
    • Government to issue principles together with the initial implementation guidance to regulators; and
    • Design and publish how central functions will be delivered, including through existing initiatives and the piloting of a new AI sandbox.
  • And in the next 12 months:
    • Agree the central functions and the monitoring metrics for further iteration of the framework;
    • Key regulators to publish guidance on how the principles will apply within their remit; and
    • Continue to develop the AI sandbox.
  • And thereafter:
    • Implement the central functions;
    • Publish a central, cross-economy AI risk register for consultation; and
    • Publish a report on how the principles and central functions are working, including whether further iteration or statutory intervention is necessary.

Our take

The UK’s approach is agile and pragmatic, allowing it to integrate into future updates of the guidance concepts that are working in other legislative initiatives, such as the EU AI Regulation. The true strictness of the UK’s approach will not become apparent until the first iteration of the guidance emerges in 6 to 12 months’ time (no meaningful detail is provided in the White Paper). However, given that the Government does not want to stifle innovation, that banning practices outright may not be possible through extension of existing data protection, financial services or consumer protection principles, and that new legislation is not favoured, it seems unlikely that the guidance will be unduly restrictive.