What is Bias in AI?
TL;DR
Systematic prejudice in AI outputs caused by biased training data or model design. Leads to unfair results across gender, race, age, and other attributes.
Bias in AI: Definition & Explanation
Bias in AI refers to the tendency of AI systems to produce systematically unfair outputs with respect to attributes such as gender, race, age, religion, and socioeconomic background.

Primary causes include biased training data (data bias), algorithm design choices (algorithmic bias), and the selection of evaluation metrics (measurement bias). Documented cases include hiring AI that scored candidates of certain genders unfairly and image generation AI that depicted specific professions with specific racial features.

Countermeasures include ensuring diversity in training data, introducing fairness metrics, using bias detection tools, conducting regular audits, and building diverse development teams.
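As a minimal sketch of what "introducing fairness metrics" can mean in practice, the snippet below computes two common group-fairness measures, demographic parity difference and the disparate impact ratio, on hypothetical binary hiring-model outputs. The data, group names, and threshold are illustrative assumptions, not from the text.

```python
def selection_rate(predictions):
    """Fraction of positive (e.g., 'advance candidate') decisions."""
    return sum(predictions) / len(predictions)

# Hypothetical model outputs (1 = positive decision), split by a
# sensitive attribute such as gender or race.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]
group_b = [0, 1, 0, 0, 0, 1, 0, 0]

rate_a = selection_rate(group_a)  # 0.625
rate_b = selection_rate(group_b)  # 0.25

# Demographic parity difference: 0 means both groups are selected
# at the same rate; larger gaps suggest possible bias.
dp_diff = rate_a - rate_b

# Disparate impact ratio: values below ~0.8 are often treated as a
# warning sign (the informal "four-fifths rule" from US hiring law).
di_ratio = rate_b / rate_a

print(f"demographic parity difference: {dp_diff:.3f}")
print(f"disparate impact ratio: {di_ratio:.3f}")
```

Metrics like these do not prove or disprove bias on their own; they flag disparities that then need investigation of the underlying data and model design.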