Norton Rose Fulbright - Data Protection Report blog

On 21 January 2020, at a meeting of the World Economic Forum, Singapore's Personal Data Protection Commission (PDPC) and Infocomm Media Development Authority (IMDA) released the second edition of their Model Artificial Intelligence (“AI”) Governance Framework (“Model Framework”).

Background – the Model Framework

The Model Framework, first released in January 2019, is a voluntary set of compliance and ethical principles, governance considerations and recommendations that organisations can adopt when deploying AI technologies at scale.  It is not legally binding.

At the heart of the Model Framework are two high-level guiding principles:

(1)        organisations using AI in decision-making should ensure that the decision-making process is explainable, transparent and fair; and

(2)        AI solutions should be human-centric.

The Model Framework provides guidance on the following four key governance areas:

  • Internal governance structure and measures: adapting existing internal governance structures and measures, or setting up new ones, to incorporate the values, risks and responsibilities relating to algorithmic decision-making. This includes delineating clear roles and responsibilities for AI governance within an organisation, putting in place processes and procedures to manage risks, and providing staff training.
  • Determining the level of human involvement in AI-augmented decision-making: assessing the risks involved and identifying an appropriate level of human involvement in the decision-making process, in order to minimise the risk of harm to individuals.
  • Operations management: issues to be considered when developing, selecting and maintaining AI models, including data management.
  • Stakeholder interaction and communication: strategies for communicating with an organisation’s stakeholders concerning the use of AI.

The Model Framework is intended to be broadly applicable – it is algorithm-agnostic, technology-agnostic, sector-agnostic and scale-and-business-model-agnostic.  Consequently, it can be adopted across all industries and businesses, regardless of the specific AI solution or technology involved.

The Second Edition – Key Updates

Key updates introduced include:

(1) Inclusion of industry examples

The Model Framework now includes real-life industry examples in each of the four key governance areas, demonstrating how organisations have implemented the framework effectively. These examples are drawn from a variety of industries, ranging from banking and finance to healthcare, technology and transportation, and are based on different use cases, reinforcing the neutral and flexible nature of the framework.

(2) Additional tools to enhance the usability of the Model Framework

In addition to the Model Framework, the IMDA and the PDPC have concurrently released two further documents to guide organisations in adopting the Model Framework:

  1. The Implementation and Self-Assessment Guide for Organisations (ISAGO); and
  2. The Compendium of Use Cases.

These documents can be accessed here.

The ISAGO was jointly developed by the IMDA, the PDPC and the World Economic Forum Centre for the Fourth Industrial Revolution, with input from the industry. It is designed to be a DIY guide for organisations seeking to implement the Model Framework and identify potential gaps in their existing AI governance framework.

The Compendium of Use Cases sets out various case studies of organisations which have operationalised the principles from the Model Framework. The case studies showcase the flexibility of the Model Framework, which can be adapted according to the needs and priorities of the different organisations. 

(3) The inclusion of new measures

 The following new measures have been introduced into the Model Framework:

  • Robustness refers to “the ability of a computer system to cope with errors during execution and erroneous input, and is assessed by the degree to which a system or component can function correctly in the presence of invalid input or stressful environmental conditions”.
  • Reproducibility refers to “the ability of an independent verification team to produce the same results using the same AI method based on the documentation made by the organisation”. This is different from repeatability, which refers to “the internal repetition of results within one’s organisation”.
  • Auditability refers to “the readiness of an AI system to undergo an assessment of its algorithms, data and design processes”.

These three measures are aimed at helping organisations enhance the transparency of the algorithms used in AI models. Industry examples and further elaboration on how to implement these three measures can be found under the “Operations Management” section of the Model Framework.

(4) Other changes

 Other useful changes introduced in the Second Edition of the Model Framework include: 

  • Clarifying the concept of “human-over-the-loop” by explaining the supervisory role to be performed by a human in AI-augmented decision-making.
  • Clarifying that organisations can consider factors such as the nature of harm (i.e. whether the harm is physical or intangible in nature), reversibility of harm (i.e. whether recourse is easily available to the affected party) and operational feasibility in determining the level of human involvement in such AI-augmented decision-making processes.
  • Providing suggestions on the level of information to be provided when interacting with various stakeholders in order to build trust, such as information on how AI is used in decision-making, how the organisation has mitigated risks, and an appropriate channel for contesting decisions made by AI.
  • Providing further guidance to organisations on how to adopt a risk-based approach to implementing AI governance measures.

Conclusion

The PDPC encourages feedback from organisations that adopt and apply the principles in the Model Framework.  The Model Framework is a “living document” that will evolve alongside the fast-paced changes in technology and the digital economy, as well as feedback from organisations that adopt it.

This Second Edition of the Model Framework is the result of feedback from organisations that have adopted the framework (an impressive list that includes large technology companies and global financial institutions), as well as insights from Singapore’s participation in leading international platforms such as the European Commission’s High-Level Expert Group on AI and the OECD Expert Group on AI.  With the incorporation of this feedback and the inclusion of real-life industry examples illustrating how companies have applied the key governance areas, the Second Edition will provide even greater practical guidance to organisations seeking to implement the Model Framework when deploying AI solutions.

Whilst the Model Framework does not impose any binding legal or regulatory obligations, organisations that intend to deploy AI solutions at scale should consider and apply it. Because the Model Framework is an accountability-based governance framework, adherence to it will assist an organisation in demonstrating that it has implemented accountability-based principles in data management and protection (e.g. the personal data protection obligations under the Singapore Personal Data Protection Act 2012).

The authors would like to thank Ji En Lee, associate at Ascendant Legal LLC, for his contribution to this article.