Ethical Use of Data for Training Machine Learning Technology - Part 2

By: Andrew Pery on January 28th, 2020



This is the second part of a three-part series on the Ethical Use of Data for Training Machine Learning Technology by guest authors Andrew Pery and Michael Simon. You can also check out Part 1 from this series.

Part 2: The Ethical and Legal Challenges of AI

AI technology bias and its potentially unintended consequences are gaining the attention of policymakers, technology companies, and civil liberties groups. In a recent article based upon an ABA Business Law Section panel, “Examining Technology Bias: Do Algorithms Introduce Ethical & Legal Challenges?”, the panelist-authors noted that:

“With artificial intelligence, we are no longer programming algorithms ourselves. Instead, we are asking a machine to make inferences and conclusions for us. Generally, these processes require large data sets to “train” the computer. What happens when we use a data set that contains biases? What happens when we use a data set for a new purpose? What happens when we identify correlations that reinforce existing societal norms that we are actually trying to change? In these instances, we may inadvertently teach the computer to replicate existing deficiencies -- or we may introduce new biases into the system. From this point of view, system design and testing needs to uncover problems that may be introduced with the use of new technology.”

So, what are the ethical considerations relating to a fair and transparent application of AI technology?

First, there must be an acknowledgment that AI technology requires a holistic approach. It is not just a matter of technology, or of conformance with specific laws. As Dr. Stephen Cave, executive director of the Leverhulme Centre for the Future of Intelligence at Cambridge, said of a recent report: “AI is only as good as the data behind it, and as such, this data must be fair and representative of all people and cultures in the world. The technology must also be developed in accordance with international laws - all this fits into the idea of AI ethics. Is it moral, is it safe…is it right?”

A useful predicate for assessing the ethics associated with AI is to ask the question: can we trust AI?

“The answer seems to point towards human input: in the words of AI researcher Professor Joanna Bryson, ‘if the underlying data reflects stereotypes, or if you train AI from human culture, you will find bias.’ And if we’re not careful, we risk integrating that bias into the computer programs that are fast taking over the running of everything from hospitals to schools to prisons – programs that are supposed to eliminate those biases in the first place.”
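To make this concrete: the bias Professor Bryson describes is often visible in the training data itself, before any model is built. Below is a minimal, hypothetical sketch in Python (the column names, data, and threshold are our own illustration, not drawn from any source cited here) of a pre-training screen that flags large gaps in historical outcome rates between groups, gaps that a model trained on the data would likely learn to reproduce.

    import pandas as pd

    # Hypothetical training data: "group" and "approved" are invented
    # column names, standing in for a protected attribute and a
    # historical decision outcome.
    df = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   1,   0,   1,   0,   0,   0],
    })

    # Selection rate per group: the share of positive outcomes each
    # group received in the historical data the model will learn from.
    rates = df.groupby("group")["approved"].mean()
    print(rates)

    # A common screening heuristic (the "four-fifths rule"): flag the
    # data if any group's rate falls below 80% of the highest rate.
    ratio = rates.min() / rates.max()
    if ratio < 0.8:
        print(f"Warning: disparate selection rates (ratio {ratio:.2f}); "
              "a model trained on this data may replicate the disparity.")

A screen like this fixes nothing by itself, but it surfaces the problem early enough for a team to ask whether the historical outcomes are ones the system should be learning at all.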

Awareness of bias in AI applications is an important consideration in developing the transparency needed to ensure adherence to ethical standards. We can see efforts being made, to varying degrees, to recognize and deal with issues of bias by governments, the technology industry, and independent organizations. The current leader of such efforts is, by far, the EU and its member states.


The E.U. sees “Ethical AI” as a Competitive Advantage

There is some momentum within the E.U. toward more rigorous and enforceable ethics guidelines for trustworthy AI. The E.U. has commissioned an expert panel that has published its initial draft guidelines for the ethical use of AI: “The use of artificial intelligence, like the use of all technology, must always be aligned with our core values and uphold fundamental rights.” As Fortune Magazine has described it, the European Commission (the executive body for the E.U.) sees ethical AI as a competitive issue; Andrus Ansip, the European Commission’s vice president for digital matters, stated that “Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust.”




The E.U. has had a long history of respecting and protecting privacy rights, with the most recent and powerful expression thereof being the General Data Protection Regulation (GDPR) that became effective on May 25, 2018. Near the end of GDPR Chapter 3, which contains the articles providing for individual access rights to data held by organizations – including the well-known and much-feared “right to erasure” of personal data – is something of a “sleeper” provision: Article 22, “Automated individual decision-making, including profiling.”

  1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

Article 22 thus potentially provides any individual subject to automated decision-making or profiling with the right to opt out of or contest such decisions, and also to avoid the use of any “special categories of personal data” referred to in Article 9(1), which includes:

“. . . personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation . . .”
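One practical consequence for teams building automated decision-making systems is that training pipelines may need to screen their features against these special categories before a model ever sees them. The Python sketch below is purely illustrative (the column names are invented, and a real audit would be done with counsel, not code alone):

    import pandas as pd

    # Invented, illustrative mapping of feature columns to GDPR
    # Article 9(1) special categories.
    SPECIAL_CATEGORY_COLUMNS = {
        "ethnicity", "political_opinion", "religion", "union_membership",
        "genetic_id", "biometric_id", "health_status", "sexual_orientation",
    }

    def strip_special_categories(features: pd.DataFrame) -> pd.DataFrame:
        """Drop any feature column flagged as special-category data."""
        flagged = SPECIAL_CATEGORY_COLUMNS & set(features.columns)
        if flagged:
            print(f"Removing special-category features: {sorted(flagged)}")
        return features.drop(columns=list(flagged))

An important caveat: dropping explicit columns does not remove proxy variables, since something as innocuous as a postal code can still encode ethnicity. That is one reason auditing, and not just filtering, is needed.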

As such, Article 22 embeds the fundamental E.U. principle of non-discrimination, which goes back to Articles 18-25 of the Treaty on the Functioning of the European Union, ratified in 2007, Article 21 of the Charter of Fundamental Rights of the European Union, ratified in 2000, and even all the way back to Article 14 of the European Convention on Human Rights, ratified in 1953.

On its face, Article 22 provides for a general prohibition against a decision made solely by automated processing. However, there are potentially problematic interpretations of the meaning and language of Article 22, particularly what constitutes a “decision,” what is an acceptable level of “human” involvement in the “decision,” and what specific legal impacts are contemplated.

The Article 29 Working Party, an advisory group established by prior E.U. directives to interpret privacy regulations, issued a series of draft guidelines on some of the GDPR articles, including Article 22, prior to the implementation of the GDPR. While the Article 29 Working Party interpreted Article 22 as barring all forms of automated decision-making that did not involve some form of human intervention, it nonetheless created some wiggle room: the application of Article 22 should take into consideration a balancing test, under which a human “decision” ought not be “fabricated,” must have an “actual influence on the results,” and “should be carried out by someone who has the authority and competence to change the decision.”
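The Working Party’s balancing test translates naturally into a workflow requirement: the automated output is advisory, and the final decision rests with a reviewer who has genuine authority to change it. Here is a minimal, hypothetical sketch of such a gate in Python (the names and the 0.5 threshold are ours, not from the guidelines):

    from dataclasses import dataclass

    @dataclass
    class Reviewer:
        name: str
        can_override: bool  # "authority and competence to change the decision"

    def final_decision(model_score: float, reviewer: Reviewer,
                       reviewer_approves: bool) -> bool:
        """Return a decision, refusing to proceed on automation alone."""
        if not reviewer.can_override:
            # A reviewer who cannot change the outcome would make the
            # human step "fabricated" in the Working Party's sense.
            raise ValueError("Human involvement must be able to change "
                             "the outcome, not act as a rubber stamp.")
        recommendation = model_score >= 0.5  # advisory only
        print(f"Automated recommendation: "
              f"{'approve' if recommendation else 'deny'}")
        # The reviewer sees the recommendation but has "actual influence
        # on the results": their judgment, not the score, is returned.
        return reviewer_approves

    # Usage: the reviewer overrules a positive automated recommendation.
    alice = Reviewer(name="Alice", can_override=True)
    print(final_decision(model_score=0.72, reviewer=alice,
                         reviewer_approves=False))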

Such controversies will need to be worked out by the Data Protection Authorities (the local E.U. member state privacy regulators) and before the courts. It is worth noting that the GDPR transformed the Article 29 Working Party from an influential but ultimately powerless advisory group into the European Data Protection Board (EDPB), which, as one law firm described it, “has a much-enhanced status. It is not merely an advisory committee, but an independent body of the European Union with its own legal personality.” Thus, whether one agrees with the guidelines on Article 22 or not, they cannot be hand-waved away. Hence, our only honest answer when asked about the potential impact of this Article is “it’s too early to tell.”

Some, including researchers at the Oxford Internet Institute, see Article 22 as also creating a “right to explanation” for AI systems that would require technology companies to reveal algorithms and source code. Experts at Stanford Law School’s CodeX not only see this “right to explanation” clearly within Article 22, but they also foresee it as creating a new world of “data auditing methodologies designed to safeguard against algorithmic bias throughout the entire product life cycle [that] will likely become the new norm for promoting compliance in automated systems.” Other experts, including some also at the Oxford Internet Institute, strongly disagree, arguing that Article 22 was never intended to create such a right and that it instead “runs the risk of being toothless.”

However appealing this new right may sound, there are also very serious practical issues with implementing it. Rich Caruana, a Microsoft researcher who has worked to better understand the internal mechanisms of algorithms, has identified a major hurdle to implementation: “as appealing as ‘transparency’ may sound, there’s no easy way to unpack the algorithm inside AI software in a way that makes sense to people. . . . Most of the time, we wouldn’t know how to do it.” But one can suspect that such pessimism might be due more to the desire to avoid a difficult task than to its impossibility; as one AI start-up CEO admitted: “It’s not impossible, . . . But it’s complicated.” More importantly, full transparency may not even be necessary: some of the experts who believe that there is no current “right to explanation” promote creating one using methodologies that do not require “opening the black box” of companies’ tightly-held intellectual property.
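One family of such methodologies treats the model purely as a black box: rather than inspecting its source code, you probe its predictions. The sketch below illustrates permutation importance, a common model-agnostic technique (the function and data here are our own illustration, not any vendor’s tooling). It ranks features by how much accuracy is lost when each one is shuffled, and it needs nothing from the model but its inputs and outputs:

    import numpy as np

    def permutation_importance(predict, X, y, rng=None):
        """Score each feature by the accuracy lost when it is shuffled.

        `predict` is a black box: any callable mapping an (n, d) array
        to n predicted labels. No access to its internals is required.
        """
        rng = rng or np.random.default_rng(0)
        baseline = np.mean(predict(X) == y)
        importances = []
        for j in range(X.shape[1]):
            X_perm = X.copy()
            # Shuffling column j breaks its relationship to the outcome.
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            importances.append(baseline - np.mean(predict(X_perm) == y))
        return np.array(importances)

    # Hypothetical proprietary model: only its predictions are observable.
    predict = lambda X: (X[:, 0] > 0.5).astype(int)
    X = np.random.default_rng(1).random((200, 3))
    y = predict(X)
    print(permutation_importance(predict, X, y))  # feature 0 should dominate

Because only inputs and outputs are needed, even a tightly-held proprietary model can be audited this way, which is why such techniques appeal to those who want explanation without forcing disclosure of source code.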


The "AI Code" Recommendations

More locally within the E.U., the U.K. House of Lords Select Committee on Artificial Intelligence has suggested that an “AI Code” covering five basic principles be created. The proposed AI Code would explore and answer questions specifically around: ‘How does AI affect people in their everyday lives, and how is this likely to change?’ ‘What are the possible risks and implications of artificial intelligence? How can these be avoided?’ and ‘What are the ethical issues presented by the development and use of artificial intelligence?’.

To this end, the House of Lords Committee made a number of high-level recommendations:

  • AI should be developed for the common good and benefit of humanity and operate on principles of intelligibility and fairness.
  • Each citizen should be given the right to be educated to a level where they can flourish mentally, emotionally and economically alongside AI technology in the jobs of the future.
  • Restrictions should be placed on any AI systems attempting to diminish the data rights or privacy of individuals, families, or communities.
  • Consent is a key factor here – ensuring that people offer informed consent before their data is captured, used, or passed to third parties.
  • Bans should be implemented on any AI technology that has the potential to hurt, destroy, or deceive human beings.

While many of these recommendations make sense, they will be difficult to enforce without being implemented through regulation or legislation. And even that may not be enough if the companies creating these algorithms are not on board. In the conclusion to this series, we'll review how governments and technology companies alike are approaching the idea of ethical AI.

 


About Andrew Pery

Andrew Pery is a marketing executive with over 25 years of experience in the high technology sector, focusing on content management and business process automation. Currently, Andrew is CMO of Top Image Systems. Andrew holds a Masters of Law degree with Distinction from Northwestern University and is a Certified Information Privacy Professional (CIPP/C) and a Certified Information Professional (CIP/AIIM).