Is Elon Musk's "maximum truth-seeking AI" achievable? Overcoming bias in artificial intelligence technologies is crucial for cybersecurity, but doing so could be a challenge.

Concerns over bias in emerging artificial intelligence (AI) tools received a fresh airing recently when billionaire Elon Musk talked about his plans to create a "maximum truth-seeking AI" as an alternative to OpenAI's Microsoft-backed ChatGPT and Google's Bard technologies. 

In an interview with Fox News' Tucker Carlson earlier this month, Musk expressed concern over what he described as ChatGPT being trained to lie and to be politically correct, among other things. He described a so-called "TruthGPT" as a third option that would be unlikely to "annihilate humans" and would offer the "best path to safety" compared with the other generative AI tools.

Musk has offered no timetable for his planned chatbot. But he has already established a new AI firm called X.AI, and has reportedly begun hiring AI staff from ChatGPT creator OpenAI as well as from Google and its parent, Alphabet.

Meanwhile, of course, AI bias affects cybersecurity risk as well. 

Surfacing a Familiar Concern

Musk's comments to Carlson echoed some of the sentiments that he and hundreds of other tech leaders, ethicists, and academicians expressed in an open letter to AI companies in March. The letter urged organizations involved in AI research and development to pause their work for at least six months, so policymakers have an opportunity to put some guardrails around the use of the technology. In making their argument, Musk and the others pointed to the potential for biased AI tools flooding "our information channels with propaganda and untruth" as one of their main concerns.

Suzannah Hicks, data strategist and scientist at the International Association of Privacy Professionals (IAPP), says the fundamental problem with bias in AI and machine learning is that it stems from human decision-making. AI models learn from extremely large data sets rather than being explicitly programmed to respond in specific ways, so bias typically enters through the data that humans choose to feed into the model.

"If biased data is entered into the model, then the output will likely contain bias as well," Hicks says. "Bias can also be introduced through data omission or by data scientists choosing a variable as a proxy for something other than what the data element is," she says. As an example, Hicks points to a machine learning algorithm that might take the number of "clicks" a user makes when browsing Netflix as a proxy for a positive indicator. "I may be clicking on movies to read their description and decide I don’t like it, but the model may misinterpret the click as an indicator of a 'like,'" she says.

Similarly, a lending service that uses AI to determine credit eligibility might end up denying a disproportionately high number of loans to people in a high-crime ZIP code if the ZIP code was part of the data set in the AI tool's learning model. "In this case, the ZIP code is being used as a proxy for human behavior, and by doing so produces a biased result," Hicks says.
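
To make the mechanism concrete, here is a minimal, hypothetical Python sketch of the kind of proxy effect Hicks describes; the data, features, and model are invented for illustration and are not drawn from any lender or from the article's sources:

```python
# Hypothetical illustration of proxy bias: income drives repayment, but the
# model never sees it, so a ZIP-code group correlated with income stands in
# for it and the model effectively scores applicants by neighborhood.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

income = rng.normal(60, 15, n)                                # unobserved true driver
zip_group = (income + rng.normal(0, 10, n) < 55).astype(int)  # 1 = "high-crime" ZIP
repaid = (income + rng.normal(0, 20, n) > 50).astype(int)

X = zip_group.reshape(-1, 1)          # the model only sees the ZIP-code proxy
model = LogisticRegression().fit(X, repaid)

for group in (0, 1):
    prob = model.predict_proba([[group]])[0, 1]
    print(f"ZIP group {group}: predicted repayment probability {prob:.2f}")
```

Nothing about an individual applicant's behavior is measured here; the predicted repayment probabilities diverge purely because of where the applicant lives, which is the biased result Hicks warns about.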

Andrew Barratt, vice president at Coalfire, says his own testing of a text-to-image generation tool provided one example of the bias that exists in emerging AI technologies. When he asked the tool to generate a photorealistic image of a happy man enjoying the sun, the tool only generated Caucasian skin tones and features, though the input he provided contained no racial context. He says one concern going forward is that providers of AI platforms seeking to monetize their technologies could introduce bias into their models in a way that is favorable to advertisers or platform providers.

Whenever you monetize a service, you typically want to maximize the monetization potential with any further evolution of that service, says Krishna Vishnubhotla, vice president of product strategy at Zimperium. Often that evolution can start deviating from the original goals or evolution path — a concern that Musk vocalized about ChatGPT in his interview with Carlson. "Herein lies the issue that Elon talks about," Vishnubhotla says.

Cybersecurity Bias in Artificial Intelligence

While Musk and his co-signatories didn't specifically call out the implications of AI bias for cybersecurity, the issue has been discussed for some time, and it's worth revisiting in the era of ChatGPT. As Aarti Borkar, a former IBM and Microsoft executive now with a venture capital fund, noted in a groundbreaking column for Fast Company in 2019, with AI becoming a prime security tool, bias is a form of risk.

"When AI models are based on false security assumptions or unconscious biases, they do more than threaten a company’s security posture," Borkar wrote. "AI that is tuned to qualify benign or malicious network traffic based on non-security factors can miss threats, allowing them to waltz into an organization’s network. It can also over-block network traffic, barring what might be business-critical communications."

With ChatGPT being enthusiastically introduced into cybersecurity products, hidden bias could contribute even more to false positives, privacy abuses, and cyber defenses riddled with gaps. Cybercriminals, for their part, could also poison the AI to influence security outcomes.
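
As a rough illustration of how poisoned training data can skew a security model, here is a hypothetical Python sketch of a label-flipping attack against a toy traffic classifier; the features, data, and attack rate are invented and not taken from any product or incident described in the article:

```python
# Hypothetical label-flipping poisoning attack on a toy "malicious traffic" classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5_000
X = rng.normal(size=(n, 2))                  # stand-ins for traffic statistics
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # 1 = "malicious"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

clean = LogisticRegression().fit(X_train, y_train)

# The attacker relabels 40% of the malicious training samples as benign.
y_poisoned = y_train.copy()
malicious_idx = np.where(y_train == 1)[0]
flipped = rng.choice(malicious_idx, size=int(0.4 * len(malicious_idx)), replace=False)
y_poisoned[flipped] = 0

poisoned = LogisticRegression().fit(X_train, y_poisoned)

# Detection rate on genuinely malicious test traffic drops after poisoning.
malicious_test = X_test[y_test == 1]
print("clean model catches:   ", round(clean.predict(malicious_test).mean(), 2))
print("poisoned model catches:", round(poisoned.predict(malicious_test).mean(), 2))
```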

"If the AI were to be hacked and manipulated to provide information that’s seemingly objective but is actually well-cloaked biased information or a distorted perspective, then the AI could become a dangerous … machine," according to the Harvard Business Review.

So the question becomes whether there really can be completely unbiased AI, and what it would take to get there.

Eliminating Bias in AI

The first step to eliminating data bias is to understand the potential for it in AI and machine learning, Hicks says. This means understanding which data variables get included in a model, and how.

Many so-called "black-box" models, such as neural networks and decision trees, are designed to independently learn patterns and make decisions based on their data sets. They don't require the user, or even the developer, to fully understand how the model arrived at a particular conclusion, she says.

"AI and machine leaning rely on black-box models a great deal, because they can handle vast amounts of data and produce very accurate results," Hicks notes. "But it’s important to remember they are exactly that — black boxes — and we have no understanding of how they come to the result provided."

The authors of a World Economic Forum blog post last October argued that open source data science (OSDS) — where stakeholders collaborate in a transparent way — might be one way to counter bias in AI. Just as open source transformed software development, OSDS can open up the data and models that AI tools use, the authors said. When data and AI models are open, data scientists would have an opportunity to "identify bugs and inefficiencies, and create alternate models that prioritize various metrics for different use cases," they wrote.

The EU's Proposed AI Risk Classification Path

The European Union's proposed Artificial Intelligence Act takes another approach. It calls for a classification system in which AI tools are categorized by the level of risk they present to health, safety, and individuals' fundamental rights. AI technologies that present an unacceptably high risk, such as real-time biometric identification systems, would be banned. Those deemed to present limited or minimal risk, such as video games and spam filters, would be subject to some basic oversight. High-risk AI projects, such as autonomous vehicles, would face rigorous testing requirements and would have to show evidence of adherence to specific data quality standards. Generative AI tools such as ChatGPT would be subject to some of these requirements as well.

The NIST Approach

In the US, the National Institute of Standards and Technology (NIST) has recommended that stakeholders broaden the scope of where they look when searching for sources of bias in AI. In addition to machine learning processes and the data used to train AI tools, the industry should consider societal and human factors, NIST said in a special publication on the need for standards to identify and manage bias in AI.

"Bias is neither new nor unique to AI and it is not possible to achieve zero risk of bias in an AI system," NIST noted. To mitigate some of the risk, NIST will develop standards for "identifying, understanding, measuring, managing, and reducing bias."

About the Author(s)

Jai Vijayan, Contributing Writer

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year career at Computerworld, Jai also covered a variety of other technology topics, including big data, Hadoop, Internet of Things, e-voting, and data analytics. Prior to Computerworld, Jai covered technology issues for The Economic Times in Bangalore, India. Jai has a Master's degree in Statistics and lives in Naperville, Ill.
