What is AI bias and how do we help prevent it?

Author: Shane Schick

Businesses are already successfully using artificial intelligence (AI) to enhance operations and customer experiences, such as predicting when equipment will need to be repaired or identifying fraudulent insurance claims. As the technology is applied for even higher-level decision-making, companies need to confront the risk of AI bias, and explore the degree of data objectivity they can realistically achieve to avoid it.

Artificial intelligence bias refers to situations where technologies such as natural language processing or machine learning produce systematic and repeatable errors. According to a 2022 survey, 54% of IT and business leaders are worried about bias in AI today.

More than a third of companies in the same survey say they have already experienced bias in AI, with consequences that range from lost revenue to legal fees.

How artificial intelligence bias develops

Analysts at market research firm AI Multiple say AI bias generally stems from two areas:

  1. Incomplete data sets that fail to take a representative look at a given subject area.
  2. Unconscious cognitive biases introduced by those developing an algorithm, such as the assumption that someone from a specific socioeconomic class is more likely to commit a crime.
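The first source above can be seen in a few lines of code. The sketch below is a hypothetical, invented example (the groups, scores and cutoff rule are made up for illustration): a naive model "trained" only on data from one group learns a decision threshold that works perfectly for that group but systematically misclassifies another group whose scores are shifted, even though both groups are equally qualified.

```python
import statistics

# Hypothetical records: (group, test_score, truly_qualified).
# In group "B" the same underlying skill yields scores 15 points lower
# (e.g. a skewed test), but actual qualification rates are identical.
def make_group(name, offset, n=200):
    rows = []
    for i in range(n):
        qualified = i % 2 == 0
        base = 75 if qualified else 55
        rows.append((name, base + offset + (i % 7) - 3, qualified))
    return rows

group_a = make_group("A", 0)
group_b = make_group("B", -15)

# An incomplete "training" set drawn only from group A.
train = group_a
threshold = statistics.mean(s for _, s, _ in train)  # naive learned cutoff

def accuracy(rows):
    return sum((score >= threshold) == q for _, score, q in rows) / len(rows)

print(f"learned threshold: {threshold:.1f}")      # ~65.0
print(f"accuracy on group A: {accuracy(group_a):.0%}")  # 100%
print(f"accuracy on group B: {accuracy(group_b):.0%}")  # 50%
```

Because group B never appeared in the training data, the model's errors on that group are systematic and repeatable rather than random, which is exactly how the survey above defines AI bias.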

Ideally, data should provide AI with perfect raw material to be ingested, synthesized and analyzed by sophisticated algorithms. "Training data," in particular, should help AI applications properly understand a problem and perform their assigned tasks. If businesses don't attempt to limit bias in their data as much as possible at the outset, though, the risk of artificial intelligence bias increases.

Worries over the potential for AI bias have grown as enterprise adoption of the technology has become more widespread. In fact, the 2022 AI Index produced by the Stanford Institute for Human-Centered Artificial Intelligence found that the volume of papers about ways to measure artificial intelligence bias has doubled in the past two years.

A report from the National Institute of Standards and Technology (NIST) warns that, if left unchecked, bias in AI could lead to discrimination, such as denying underrepresented groups access to education or even a place to live.

Preventing artificial intelligence bias

Some efforts to combat bias in AI are putting humans at the heart of the process. For example, The New York Times recently reported on an initiative where Native American students were tagging images with Indigenous terms to create metadata that would reduce the potential for bias in a photo recognition algorithm.

Researchers from MIT and Harvard University, meanwhile, have been applying techniques from neuroscience to develop machine learning models that could overcome biased data sets. As with similar studies, the idea is to explore how algorithms identify and classify data, and whether prejudices of some kind can creep in.

An article in the Harvard Business Review suggests AI applications need to be developed with the ambition of eliminating bias from their data, relying on objective data rather than subjective data based on personal interpretation.

This kind of data is vetted and "cleansed," the article said, or is given a confidence score if its objectivity is in question.

Open-source tools, such as AI Fairness 360, are available to test for AI bias within data sets, while frameworks like Deloitte's Trustworthy AI can be used to evaluate the degree of bias early on.
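To give a sense of what such tests measure, here is a minimal, self-contained sketch of two common group-fairness metrics that toolkits like AI Fairness 360 automate: disparate impact (the ratio of favorable-outcome rates between groups) and statistical parity difference. The predictions below are invented for illustration, and this is a simplified hand-rolled version, not the toolkit's own API.

```python
# Each record is (group, predicted_outcome), where 1 is the favorable outcome.
# Invented data for illustration only.
predictions = [
    ("privileged", 1), ("privileged", 1), ("privileged", 0), ("privileged", 1),
    ("unprivileged", 0), ("unprivileged", 1), ("unprivileged", 0), ("unprivileged", 0),
]

def favorable_rate(rows, group):
    outcomes = [y for g, y in rows if g == group]
    return sum(outcomes) / len(outcomes)

p = favorable_rate(predictions, "privileged")    # 3/4
u = favorable_rate(predictions, "unprivileged")  # 1/4

disparate_impact = u / p            # 1.0 would mean parity between groups
statistical_parity_diff = u - p     # 0.0 would mean parity between groups

print(f"disparate impact: {disparate_impact:.2f}")
print(f"statistical parity difference: {statistical_parity_diff:.2f}")
```

A common heuristic (the "80% rule" from U.S. employment guidelines) flags a disparate impact ratio below 0.8 as a potential concern; here the ratio is 0.33, so this hypothetical model would warrant a closer look.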

AI and 5G

AI applications not only need to use the most objective and complete data sets possible; they would also benefit greatly from running on networks optimized to deliver real-time information quickly.

The potential of 5G to deliver high peak data rates, even in places where fiber is not available, helps ensure that those relying on AI to make timely decisions can get insights at the speed their business requires.

Learn more about how 5G and edge computing can help companies set the stage for getting ahead of AI bias and improving business outcomes.

The author of this content is a paid contributor for Verizon.