Analysts at market research firm AI Multiple say AI bias generally stems from two areas:
- Incomplete data sets that do not representatively sample a given subject area or population.
- Unconscious cognitive biases introduced by those developing an algorithm, such as the assumption that someone from a specific socioeconomic class is more likely to commit a crime.
Ideally, data should provide AI with perfect raw material to be ingested, synthesized and analyzed by sophisticated algorithms. Training data, in particular, should help AI applications properly understand a problem and perform their assigned tasks. If businesses don't attempt to limit bias in their data as much as possible at the outset, though, the risk of artificial intelligence bias increases.
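As a concrete illustration of the first source of bias, one quick sanity check is to compare the demographic makeup of a training set against a reference population before any model is trained. The sketch below is a minimal example of that idea; the group names, records and reference shares are hypothetical, not a prescribed workflow.

```python
from collections import Counter

# Hypothetical training records; in practice these would be loaded from a real data set.
training_records = [
    {"group": "A", "label": 1},
    {"group": "A", "label": 0},
    {"group": "A", "label": 1},
    {"group": "B", "label": 0},
]

# Assumed reference shares for the population the model is meant to serve.
reference_shares = {"A": 0.5, "B": 0.5}

counts = Counter(record["group"] for record in training_records)
total = sum(counts.values())

for group, expected in reference_shares.items():
    observed = counts.get(group, 0) / total
    gap = observed - expected
    print(f"{group}: observed {observed:.0%}, expected {expected:.0%}, gap {gap:+.0%}")
    if abs(gap) > 0.10:  # arbitrary threshold for flagging under- or over-representation
        print(f"  warning: group {group} is not representatively sampled")
```

A check like this only catches imbalance in the groups you think to measure; it does not address the second source of bias, the assumptions developers bake into the algorithm itself.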
Worries over the potential for AI bias have grown as enterprise adoption of the technology has become more widespread. In fact, the 2022 AI Index produced by the Stanford Institute for Human-Centered Artificial Intelligence found that the volume of papers about ways to measure artificial intelligence bias has doubled in the past two years.
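One measure that appears throughout that literature is demographic parity: comparing a model's positive-prediction rate across groups. A minimal sketch, assuming binary predictions and a single sensitive attribute, might look like the following (the predictions and group labels are invented for illustration):

```python
def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rate across groups."""
    tallies = {}
    for pred, group in zip(predictions, groups):
        total, positives = tallies.get(group, (0, 0))
        tallies[group] = (total + 1, positives + (1 if pred == 1 else 0))
    selection_rates = {g: pos / tot for g, (tot, pos) in tallies.items()}
    return max(selection_rates.values()) - min(selection_rates.values()), selection_rates

# Hypothetical model outputs and group memberships.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, rates = demographic_parity_difference(preds, groups)
print(rates)                                        # {'A': 0.75, 'B': 0.25}
print(f"demographic parity difference: {gap:.2f}")  # 0.50; 0.0 would mean equal rates
```

Demographic parity is only one of many proposed metrics, and the research surveyed in the AI Index covers a range of competing definitions of fairness.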
A report from the National Institute of Standards and Technology (NIST) warns that, if left unchecked, bias in AI could lead to discrimination, such as denying underrepresented groups access to education or even a place to live.