Ethical Use of Data for Training Machine Learning Technology - Part 3
By: Andrew Pery on January 30th, 2020

This is the third part of a 3-part series on the Ethical Use of Data for Training Machine Learning Technology by guest authors Andrew Pery and Michael Simon. You can also check out Part 1 and Part 2 of this series.

Part 3: Regulatory Efforts in the U.S. Present a Bleak Perspective

In the United States, governmental efforts to examine AI have made far less progress than in the E.U. The most recent effort at the federal level, the Algorithmic Accountability Act of 2019 (S.1108), sponsored by Senator Ron Wyden (D-OR) and Senator Cory Booker (D-NJ) (with a parallel House bill, H.R.2231, sponsored by Representative Yvette Clarke (D-NY)), seeks "To direct the Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments." The proposed law would require the Federal Trade Commission to enact regulations within the next two years requiring companies that make over $50 million per year or collect data on more than 1 million people to perform an "automated decision system impact assessment." However, unlike the GDPR's transparency requirements (no matter how debatable), the proposed bill would not require those assessments to be made public. Despite this lack of a transparency provision, the bill was quickly endorsed by a number of civil rights groups.

Another way in which the bill is weaker than the GDPR is that it provides no private right of action, instead empowering state attorneys general to bring civil actions in federal court. Indeed, within the U.S., there is still much debate over providing a private right of action for individuals (versus just government agencies) to sue over unethical AI. For some, like Rashida Richardson, director of policy research at the AI Now Institute, such a private right is "a necessary one to require 'good governance efforts' by government agencies that use algorithmic decision-making tools."

Thus, while the bill has been lauded by experts as "a meaningful first step," these same experts have raised concerns over the lack of public disclosure and transparency. It has also been noted that the FTC has had a weak record of bringing actions to protect consumers, or even of enforcing prior orders against technology companies. All of this may well be for naught anyway: as of the time of publication of this article, the bill had not made any reported progress.

Meanwhile, experts have warned about "the Trump administration's lack of engagement with AI policy." Indeed, while Michael Kratsios, the Deputy CTO for the White House Office of Science and Technology Policy, has publicly pushed for public-private collaboration on AI development, he has also made clear, strong statements against regulation: "The White House is committed to removing unnecessary regulatory burdens on emerging technologies and promoting the safe deployment of these technologies, . . . Overregulation can kill a nascent technology before it ever has the chance to fuel economic growth or create new jobs."

And, as we discussed in Part 1, the current administration's effort through HUD to "radically redefine" FHA disparate impact standards may well excuse and even encourage the use of biased AI. While the initial efforts at creating potential "safe harbors" for landlords and mortgage companies would start with HUD, some have noted how it could become the first in a wave of similar de-regulatory efforts.

This is far from the first attempt to examine the issue. A few years ago, H.R.4625, the "FUTURE of Artificial Intelligence Act of 2017," sponsored by a bipartisan group in the House and Senate, included a provision on "Supporting the unbiased development of AI." However, the provision didn't offer a solution; instead, it would have set up a 19-person federal advisory committee within the Commerce Department to track the growth of this technology and provide recommendations on its impact. The bill never got past the subcommittee level.

Some elements within the U.S. government are making strides towards AI fairness. For example, the Department of Defense, through its DARPA research arm, began a program titled "Explainable Artificial Intelligence (XAI)" in 2017.
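To make the goal of XAI concrete: the program pursues models whose decisions can be explained to a human operator. Below is a minimal sketch of one widely used explainability technique, permutation importance; the synthetic data and feature handling are our own illustration, not DARPA's actual methodology.

```python
# A minimal sketch of one common explainability technique: permutation
# importance. It illustrates the kind of question XAI research asks
# ("which inputs drive this decision?"); it is not DARPA's actual method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 1,000 records with 5 features.
X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```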

Unfortunately, while some parts of the U.S. government are working towards eliminating AI bias, others are seemingly working just as hard, if not harder, in the opposite direction. For example, Reuters reported last year that Immigration and Customs Enforcement (ICE) "modified a tool officers have been using since 2013 when deciding whether an immigrant should be detained or released on bond . . . the agency removed the 'release' recommendation . . . The number of immigrants with no criminal history that ICE booked into detention tripled."

Fortunately, more promising progress has been made in the U.S. at the local level. One might, of course, expect the current highlight of U.S. privacy law, the California Consumer Privacy Act (CCPA), which took effect in January 2020, to have some say on this issue. Yet one would be wrong, as the CCPA lacks an analog of the GDPR's Article 22.

Meanwhile, in Washington state, legislators this February introduced an algorithmic accountability bill (HB 1655 and SB 5527) that would require an "algorithmic accountability report" from any state agency to demonstrate the fairness and transparency of any "automated decision system." Local experts came out in favor of the bill as it worked its way through the state legislature, though they cautioned that it would not require disclosure of the code behind such decision-making systems. These experts also expressed concern over the lack of direct comment from either of the state's two technology giants, Microsoft and Amazon. Not surprisingly, as of the time of publication of this article, the Washington bill was viewed as stalled.

Other, smaller-scale efforts have been more successfully underway in states such as Vermont, where an AI task force established by H.378/Act 137 has been meeting for more than a year and is set to deliver a report on January 15, 2020. Alabama established its own task force in the summer of 2019, with a full report expected by mid-2020. Massachusetts is considering establishing a similar commission; its bill, H.2701, was most recently reported favorably out of committee and is hopefully heading towards what could be a positive vote.




Another initially promising initiative was New York City Local Law 49 of 2018, which established a task force to examine the City's automated decision systems for algorithmic discrimination. New York City's task force, highly praised by the ACLU and other organizations, was able to issue an initial letter with procedural recommendations and to bring many national and local organizations on board, all with the goal of a full report by the fall of 2019, according to the Task Force website. Sadly, a recent article in Fast Company noted, "The first effort to regulate AI was a spectacular failure." The article goes on to make the sheer scope of that failure even clearer:

"Flash forward 18 months, and the end of the process couldn't be more dissimilar from its start. The nervous energy had been replaced with exhaustion. Our optimism that we'd be able to provide an outline for the ways that the New York City government should be using automated decision systems gave way to a fatalistic belief that we may not be able to tackle a problem this big after all."

Why such pessimism? A Vox.com report shows how the task force could not complete even its most fundamental task, namely identifying even a bare list of the algorithms used by the City:

"Nearly two years later, the task force largely failed to unearth much about how these systems actually work. While the City says it provided some examples of ADSs used by agencies — specifically, five — the City did not produce the full list of the automated decision systems known to be used by its agencies. . ."


Dealing with AI Ethical Issues, But Not Always in a Productive Way

There has been some effort by leading tech companies to address the ethical use of data. Microsoft developed a set of six ethical principles spanning fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Microsoft also announced last year that it would establish an "AI and Ethics in Engineering and Research (AETHER) Committee" led by top executives.

In fact, of all of the major technology companies, Microsoft could perhaps be called out as the most promisingly progressive on AI ethics issues. Going back to the public statement by its President, Brad Smith, Microsoft has displayed a candor and openness uncommon for the industry:

"The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself. And if there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so. This, in fact, is what we believe is needed today – a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission. . . .

While we appreciate that some people today are calling for tech companies to make these decisions – and we recognize a clear need for our own exercise of responsibility, as discussed further below – we believe this is an inadequate substitute for decision making by the public and its representatives in a democratic republic. . .

It may seem unusual for a company to ask for government regulation of its products, but there are many markets where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike."

Meanwhile, Facebook has recently granted $7.5 million to the Technical University of Munich to establish an Institute for Ethics in AI. Amazon, for its part, has collaborated with the National Science Foundation to establish a "Program on Fairness in Artificial Intelligence," with an anticipated $7.6 million in available grants (the deadline for which remains open until June 25).

Other tech companies have subscribed to the Partnership on AI consortium and to its principles for "bringing together diverse, global voices to realize the promise of artificial intelligence." Yet not all of the technology giants are on board with such initiatives, and some go so far as to denigrate such efforts: "When you look at some of those initiatives, the wheels are falling off some of those initiatives, unfortunately, because it's just too early," in the words of Dr. Peter Stanski, head of Amazon's solution architecture in ANZ. In this light, perhaps it is not surprising that after forming an ethics advisory panel in March, Google dissolved it less than three weeks later.

There are other potentially useful initiatives beyond aspirational statements. For example, Facebook stated in late 2016 that "Discriminatory advertising has no place" on its platform, and it has promised to develop a tool to "detect and automatically disable the use of ethnic affinity marketing for certain types of ads" after such ethnic affinity ads played a role in the 2016 elections. Other tech companies, such as IBM, are following suit and developing tools to detect AI bias. Yet, despite these promises, there are still reasons to remain cynical, as tests in late 2017 by ProPublica showed that it was still possible to get ads that blatantly violated the Fair Housing Act approved "within minutes" by Facebook.
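Toolkits like IBM's AI Fairness 360 codify dozens of formal bias metrics; one of the simplest is the disparate impact ratio behind the "four-fifths rule" long used in U.S. employment law. A minimal sketch, with entirely hypothetical approval counts, shows the basic arithmetic such detection tools automate:

```python
# A minimal sketch of one common bias metric: the disparate impact ratio.
# The group labels and outcome counts are hypothetical; real toolkits
# such as IBM's AI Fairness 360 compute this (and many subtler metrics)
# from actual model outputs.
def disparate_impact(approved_protected: int, total_protected: int,
                     approved_reference: int, total_reference: int) -> float:
    """Ratio of the protected group's approval rate to the
    reference group's approval rate."""
    rate_protected = approved_protected / total_protected
    rate_reference = approved_reference / total_reference
    return rate_protected / rate_reference

# Hypothetical ad-approval outcomes for two demographic groups.
ratio = disparate_impact(approved_protected=120, total_protected=400,
                         approved_reference=300, total_reference=500)
print(f"Disparate impact ratio: {ratio:.2f}")

# The common "four-fifths" rule of thumb flags ratios below 0.8.
if ratio < 0.8:
    print("Potential adverse impact: ratio falls below the 0.8 threshold.")
```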

Unfortunately, some technology companies, even some of the largest and most visible, have been less forthcoming. Amazon, for example, has gone so far as to claim that it is better to keep its transparency efforts behind the scenes: "We do similar things (referring to transparency initiatives); we just don't talk that much about it. Because I don't think there's value in just saying 'hey we're doing something' – what are you doing?" Amazon's head of emerging technologies for the region, Olivier Klein, told Computerworld. And as Cathy O'Neil notes in "Weapons of Math Destruction" (on page 29), many technology companies go out of their way to hide their algorithms:

"One common justification is that the algorithm constitutes a "secret sauce" crucial to their business. It's intellectual property, and it must be defended, if need be, with legions of lawyers and lobbyists. In the case of web giants like Google, Amazon, and Facebook, these precisely tailored algorithms alone are worth hundreds of billions of dollars."

Interestingly, a group of Chinese companies developed what appears to be a comprehensive analysis of ethics and AI in which they acknowledge as the premise of the study that "AI is an extension of human intelligence, and it is also an extension of the human value system. In its development, it should include the correct consideration of the ethical values of human beings. Setting the ethical requirements of AI technology relies on the deep thinking and broad consensus of the community and the public on AI ethics."


Technology Organizations Appear to Be Fully Engaged in Ethical AI

While the commitment to making AI fair, ethical, and as neutral as possible is not always clear at the corporate level of the technology industry, there is no mistaking the clear drive by many individuals in the industry towards unbiased AI. Industry researchers, experts, and academics around the world have dedicated themselves to creating a better future for AI, and for us all. Some of the leading organizations and their work include:

Partnership on AI

As mentioned previously, the Partnership on AI is an organization with over 80 partner members in 13 countries dedicated to research and discussion on key AI issues. Despite its clear technology-company connections, the organization notes that more than half of its members are non-profits. The second pillar of the Partnership's work is "Fair, Transparent, and Accountable AI," because, in the Partnership's words: "we need to be sensitive to the possibility that there are hidden assumptions and biases in data, and therefore in the systems built from that data — in addition to a wide range of other system choices which can be impacted by biases, assumptions, and limits." While the Partnership's work is still at an early stage, one of its first two studies was the highly comprehensive "Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System."

AI Now Institute

Founded in 2017 and now housed at New York University, AI Now "produces interdisciplinary research on the social implications of artificial intelligence." Among its most important research into ethics and AI is its 2018 report, "Algorithmic Impact Assessments: A Practical Framework For Public Agency Accountability." As described in AI Now's update to this report, these Assessments (AIAs) have four goals:

  1. "Respect the public's right to know which systems impact their lives and how they do so by publicly listing and describing algorithmic systems used to make significant decisions affecting identifiable individuals or groups, including their purpose, reach, and potential public impact;
  2. Ensure greater accountability of algorithmic systems by providing a meaningful and ongoing opportunity for external researchers to review, audit, and assess these systems using methods that allow them to identify and detect problems;
  3. Increase public agencies' internal expertise and capacity to evaluate the systems they procure, so that they can anticipate issues that might raise concerns, such as disparate impacts or due process violations; and
  4. Ensure that the public has a meaningful opportunity to respond to and, if necessary, dispute an agency's approach to algorithmic accountability. . ."

Another area of focus for AI Now has been ameliorating the underlying issue of the lack of diversity at AI/tech firms. In short, as AI Now and others have found, when only white men work on AI, they tend to miss issues that may impact minorities. On a personal note, the authors were privileged to attend AI Now's annual Symposium in October 2019. The 2019 Symposium presented the stories of those who, having found themselves potential victims of biased AI, chose instead to fight back. It was an inspiring and uplifting event, and a much needed one as well.

Center for Democracy & Technology (CDT)

The CDT is a non-profit based in Washington, DC and the E.U. that describes itself as a "champion of global online civil liberties and human rights, driving policy outcomes that keep the internet open, innovative, and free." One of the CDT's strongest initiatives in this area has been the creation of an interactive online guide to envisioning, building, and testing applications that are free of bias.

Fairness, Accountability, and Transparency in Machine Learning

This group describes itself as "Bringing together a growing community of researchers and practitioners concerned with fairness, accountability, and transparency in machine learning," and it includes some of the top experts in this area. The organization produces annual events that focus more on technical issues than on policy. However, the breadth and depth of the papers generated for these events are nothing short of astounding.

Law school professors, such as the University of Maryland's Frank Pasquale, author of "The Black Box Society: The Secret Algorithms That Control Money and Information," have also written extensively on the dangers in this area. Professor Pasquale testified on these issues and others in 2017 before the United States House of Representatives Committee on Energy and Commerce Subcommittee on Digital Commerce and Consumer Protection.

Balancing the Utility of AI and Ethical Bias

A recent survey by Deloitte highlighted the dilemma associated with the utility of AI on the one hand and its potential risks on the other:

"A growing number of companies see AI as critical to their future. But concerns about possible misuse of the technology are on the rise. 76 percent of executives said they expected AI to 'substantially transform' their companies within three years, while about a third of respondents named ethical risks as one of the top three concerns about the technology."

Ethical use of AI ought not to be considered only a legal and moral obligation but also a business imperative. It makes good business sense to be transparent in the application of AI, because transparency fosters trust and engenders brand loyalty:

"Companies need to make ethics and values a focus of AI development. Some reasons for this are obvious: Three-fourths of consumers today say they won't buy from unethical companies, while 86% say they're more loyal to ethical companies, according to the 2019 Edelman Trust Barometer."

As regulations tend to lag technological innovation, the Deloitte report suggests that industry should take a proactive role in fostering transparency and accountability relating to the applications of AI technologies and how they impact privacy and security rights:

"Designing ethics into AI starts with determining what matters to stakeholders such as customers, employees, regulators, and the general public. Companies should consider setting up a dedicated AI governance and an advisory committee including cross-functional leaders and external advisers that would engage with stakeholders, including multi-stakeholder working groups, and establish and oversee the governance of AI-enabled solutions including their design, development, deployment, and use.

 


About Andrew Pery

Andrew Pery is a marketing executive with over 25 years of experience in the high technology sector, focusing on content management and business process automation. Currently, Andrew is CMO of Top Image Systems. Andrew holds a Masters of Law degree with Distinction from Northwestern University and is a Certified Information Privacy Professional (CIPP/C) and a Certified Information Professional (CIP/AIIM).