CIPL Organizes Webinar on EU Approach to Regulating AI and Regulatory Experimentation

On March 25, 2021, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth organized an expert roundtable on the EU Approach to Regulating AI – How Can Experimentation Help Bridge Innovation and Regulation? (the “Roundtable”). The Roundtable was hosted by Dragoș Tudorache, Member of the European Parliament and Chair of the Parliament’s Artificial Intelligence in a Digital Age (“AIDA”) Committee. The Roundtable gathered industry representatives and data protection authorities (“DPAs”), as well as Axel Voss, Rapporteur of the AIDA Committee.

The panelists explored how experimentation methodologies, such as policy prototyping and regulatory sandboxes, can help create the right rules and frameworks, and interpret them constructively, so that AI is regulated in a way that enables responsible innovation and risk mitigation while still allowing for honest error and constant improvement.

Dragoș Tudorache opened the Roundtable by underscoring the need for states and governments to be up to the task of effective lawmaking that enables both innovation and social order, especially in an era of rapid digital transformation. Policy prototyping and regulatory sandboxes, he noted, could provide for genuine cooperation and co-creation of the rules of the game between lawmakers, regulators and industry. Policy makers must, however, ensure that the results produce social good and remain unbiased.

The Roundtable then presented the policy prototyping and regulatory sandbox methodologies, which operate in different contexts. Regulatory sandboxes operate within existing legislation to test specific innovative products under the supervision of a regulator, whereas policy prototyping operates where a new regulatory framework or policy is being contemplated: a prototype of the framework is tested in real conditions to inform the design of the new rules. Policy prototyping also helps identify the limitations of the prototype and ultimately yields recommendations on how legislation can successfully be drafted. Regulatory sandboxes and policy prototyping are chronologically complementary; once the legislation is enacted, sandboxes allow for continued testing of it. The need for experimentation in digital lawmaking will become even more essential as different types of legal frameworks come into play and may conflict with each other.

In the context of AI, policy prototyping has been used to test the effectiveness of AI risk assessments, meaning the assessments startups perform on their AI products to identify and assess the likelihood and severity of harm to individuals and society. The earlier risks such as bias or lack of transparency are identified, the better they can be addressed and proper mitigations built into AI products, which can sometimes accelerate startups’ go-to-market strategy.

The Roundtable also highlighted the need to assess and monitor how the rules adapt to AI uses, as new risks and challenges continue to appear during product deployment and use. This requires close and ongoing collaboration between legal, privacy and innovation teams within companies to mitigate the risks and implement effective privacy-by-design policies and procedures. The Roundtable emphasized that a multi-stakeholder approach also is key to including perspectives from data scientists and consumer panels before deciding how AI products are built.

The second part of the Roundtable focused on how regulatory sandboxes, which provide a supervised safe space set up by a regulator for piloting and testing innovative products, can bring assurance that innovation is taking place in a responsible and accountable manner. Regulatory sandbox projects are currently underway with the Norwegian and French DPAs. Regulatory sandboxes help companies better understand the requirements of the EU General Data Protection Regulation by reducing grey areas and overcoming regulatory barriers to move forward with beneficial AI products and uses. They also can strengthen DPAs’ understanding of AI, which is needed when DPAs perform audits or undertake enforcement actions. Transparency, data minimization and fairness are often discussed during the sandbox process. The results of a sandbox may be shared through regulatory guidance, blog posts or workshops to widely communicate best practices and lessons learned from a specific case. Key success factors of a regulatory sandbox include (1) clear rules of engagement between the regulator and the sandbox participant; (2) sufficient resources on both sides; (3) open collaboration; (4) sharing of information; and (5) the freedom of each party to challenge the views of the other.

In his remarks, Axel Voss expressed full support for using experimentation to bridge innovation and legislation, noting that the EU needs faster regulatory outcomes than traditional lawmaking can deliver in order to compete internationally. The EU also needs experimentation through sandboxes to develop AI that is trustworthy, human-centric, secure, unbiased, environmentally friendly and sustainable, as well as to provide access to data and build data spaces.

In closing remarks, Dragoș Tudorache reinforced the need to move to a concept of staged lawmaking for regulating technologies, starting with prototyping and followed by adaptation over time. He will recommend that the European Commission consider regulatory sandboxing in its upcoming AI regulation.

To learn more about CIPL’s work on smart regulation and AI, please contact Michelle Marcoot at mmarcoot@HuntonAK.com.
