Regulation of AI-Based Applications: The Inevitable New Frontier

By: Andrew Pery on November 12th, 2019



According to a 2019 IDC study, worldwide spending on Artificial Intelligence (AI) systems is estimated to reach $35.8 billion in 2019 and is expected to more than double to $79.2 billion by 2022, an annual growth rate of 38% over the 2018-2022 period. The economic benefits and utility of AI technologies are clear and compelling. No doubt, applications of AI may address some of the most vexing social challenges, such as health, the environment, economic empowerment, education, and infrastructure. At the same time, as AI technologies become more pervasive, they may be misused and, in the absence of increased transparency and proactive disclosures, create ethical and legal gaps. Increased regulation may be the only way to address such gaps.

AI in the Courts

Just recently, my colleague and co-author of this article, Mike Simon, and I had the opportunity to attend the annual AI Now Institute Symposium at New York University, which focuses on the socio-economic impacts of AI technologies. The Symposium theme this year was the "Growing Pushback Against Harmful AI." We were particularly struck by a session on the AI-based Michigan Integrated Data Automated System (MiDAS), and how the inaccuracy of its algorithms got the state into hot water. Michigan had used MiDAS to replace human investigators and improve efficiency in determining fraudulent unemployment claims. Unfortunately, due to a lack of human verification, the algorithms erroneously flagged over 30,000 claimants for fraud. Jennifer Lord, the attorney acting on behalf of the plaintiffs in a class-action suit, commented that the “faulty algorithms used resulted in thousands of claimants filing for bankruptcy, losing homes, and unable to pass credit checks." Other attorneys involved in the matter blamed MiDAS for even worse: “We have, I think, two suicides.”




The scope of the harmful impacts of AI algorithms was documented in the AI Now Institute 2019 report, “Litigating Algorithms: New Challenges to Government Use of Algorithmic Decision Systems.” The cases cited in the report share one common theme: a conspicuous absence of transparency in the implementation of algorithmic decision systems that profoundly impact people’s lives. For example, the report describes an automated decision system implemented by the State of Idaho to determine eligibility for disability services for adults with intellectual and developmental disabilities. The system caused a significant drop in the funds received by qualified recipients of the program. At trial, the court rejected the State’s claim that it could not disclose how the algorithm calculated the allocation of funds on the ground that the formula was a trade secret. The court then ordered the State to disclose the formula, to remedy the flaws in the algorithm, and to develop higher standards of transparency and accountability in its application to ensure that funds are allocated equitably.

The Effort to Protect Consumer Rights

Such harmful consequences of AI technologies bring into perspective the urgent need for greater accountability and oversight to protect consumer rights. Voices for the regulation of AI-based applications are gaining volume, given the control that corporations have over how AI is used. Businesses are certainly benefiting at the expense of consumers, “especially in the realm of technology where incentives for corporations can sometimes be to move fast and break things rather than to be overly thoughtful.” There must be a balance between promoting AI innovation and its social utility on the one hand and safeguarding consumer rights on the other, even though this is “uncharted territory for an age that is passing the baton from human leadership to machine learning emergence, automation, robotic manufacturing and deep learning reliance.”

Policymakers are recognizing the need to act. The EU is at the vanguard of legislative action. As early as 2017, the European Parliament’s Committee on Legal Affairs, in its report on Civil Law Rules on Robotics, urged the regulation of AI technologies on the basis that:

"humankind stands on the threshold of an era when ever more sophisticated robots, bots, androids and other manifestations of artificial intelligence (AI) seem to be poised to unleash a new industrial revolution, which is likely to leave no stratum of society untouched…"

The Committee recommended a series of comprehensive measures to regulate the application and use of robots, including:

  • A “Union system of registration of advanced robots”;
  • “Development of robot technology should focus on complementing human capabilities and not on replacing them”;
  • Creating a principle of transparency, namely that it should always be possible to supply the rationale behind any decision taken with the aid of AI; and
  • Enforcing an ethical framework based on the principles of beneficence, non-maleficence, autonomy, and justice, as well as on the principles and values enshrined in Article 2 of the Treaty on European Union and the Charter of Fundamental Rights, such as human dignity, equality, justice and equity, non-discrimination, informed consent, private and family life, and data protection.

The EU General Data Protection Regulation (GDPR) has, in essence, enshrined enhanced protection of privacy rights with respect to the application of automated decision making:

“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

The European Commission has further reinforced the importance of transparency through seven key principles, the most significant of which requires that companies using AI systems be transparent with the public and that “people need to be informed when they are in contact with an algorithm and not another human being.”

In the U.S. context, California recently passed a bold piece of legislation: the “Bolstering Online Transparency,” or B.O.T., bill, which became effective July 1, 2019. The legislation requires chatbot (a/k/a “bot”) application developers, and vendors deploying them, to “declare” their use in a clear and conspicuous manner. The bill also prohibits communicating with consumers “with the intent to mislead…about [the bot’s] artificial identity for the purpose of knowingly deceiving the person about the content of the communication” in order to incentivize a purchase or influence a vote.
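
To make the disclosure requirement concrete, here is a minimal, hypothetical sketch of how a chatbot might surface such a declaration before any other interaction. The bill does not prescribe an implementation, and the names and message wording below are illustrative assumptions only:

    # Hypothetical illustration only: the B.O.T. bill requires "clear and
    # conspicuous" disclosure of a bot's artificial identity but does not
    # prescribe how that disclosure is implemented.

    BOT_DISCLOSURE = "Disclosure: You are chatting with an automated bot, not a human."

    def reply(user_message: str) -> str:
        """Placeholder for whatever the bot would actually say in response."""
        return f"You said: {user_message}"

    def start_session() -> None:
        # Present the disclosure before any other content, so the user is
        # informed up front that they are not speaking with a person.
        print(BOT_DISCLOSURE)
        print(reply("Hello"))

    if __name__ == "__main__":
        start_session()

The design point is simply that the declaration appears clearly and conspicuously at the start of the interaction, rather than buried in terms of service.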

While the California legislation is one of the first attempts to regulate the use of AI technology, critics argue that it falls short of expectations in two respects:

  • Surprisingly, the legislation does not provide for penalties in the event of an infringement. It is possible to bring an action under California’s Unfair Competition Law, which “gives the state Attorney General broad enforcement authority to levy fines of up to $2,500 per violation, as well as equitable remedies,” but the lack of a direct remedy in the bill is disappointing.
  • Even more problematic is the fact that the legislation does not require platform providers to disclose the use of bots. While the initial version imposed such an obligation on platforms, the final version removed it and instead placed the obligation on the creators of bots. As one observer put it, “Can we really expect individuals who build malign bots to voluntarily identify their creations as such?”

Even so, the legislation moves the needle toward much-needed regulatory oversight, though perhaps not far enough: “Without requiring online platforms to have a stake in the successful enforcement of the B.O.T. Bill, the California government will be left to its own devices and resources, which are already thinly spread. Removing all responsibility from online platforms significantly reduces the bill’s chances to be successful.”

A further legal issue may affect the constitutionality of the California legislation: some argue that its provisions infringe on the First Amendment. The Electronic Frontier Foundation contends that the legislation’s requirement to disclose the humans who create bots unduly restrains anonymous internet speech, which the Supreme Court has upheld as protected.

The counter-argument is that the California legislation does not constrain the use of bots as such but rather the “time, place, and manner” in which they may be used, a type of restraint subject to less exacting scrutiny under the First Amendment. Informing consumers that they are interacting with a bot serves the public interest by balancing the technology’s utility against the need to safeguard consumer interests.

Ultimately, it seems difficult to argue that the regulation of AI is anything other than inevitable and necessary. Oren Etzioni, a leading authority on AI, has postulated five key imperatives for its regulation:

  • “Set up regulations against AI-enabled weaponry and cyberweapons”;
  • “AI must be subject to the full gamut of laws that apply to its human operator”;
  • “AI shall clearly disclose that it is not human”;
  • “AI shall not retain or disclose confidential information without explicit prior approval from the source”; and
  • “AI must not increase any bias that already exists in our systems.”

Conclusion

Failure to impose regulatory oversight over the use of AI may lead to a dystopian world like the one so starkly painted in Stanley Kubrick's classic movie 2001: A Space Odyssey, in which the HAL 9000 computer refuses the commands of its human masters: “I’m sorry, Dave. I’m afraid I can’t do that.” Elon Musk has likewise warned that failure to regulate AI may lead to “a war by doing fake news and spoofing email accounts and fake information, and just by manipulating information.”

 


About Andrew Pery

Andrew Pery is a marketing executive with over 25 years of experience in the high technology sector, focusing on content management and business process automation. Currently, Andrew is CMO of Top Image Systems. Andrew holds a Master of Laws degree with Distinction from Northwestern University and is a Certified Information Privacy Professional (CIPP/C) and a Certified Information Professional (CIP/AIIM).