What is Explainable AI (XAI)?
TL;DR
A set of techniques that make AI decision-making understandable to humans. Essential for ensuring trustworthiness and transparency.
Explainable AI (XAI): Definition & Explanation
Explainable AI (XAI) is the collective term for technologies and methods that explain, in human-understandable terms, why an AI model made a specific decision or prediction. Complex models such as deep neural networks are often called 'black boxes' because their internal decision processes are opaque. Techniques such as SHAP values, LIME, attention visualization, and feature importance reveal which input elements influenced a decision and to what degree. XAI is essential for ensuring transparency and accountability in high-stakes domains such as medical diagnosis, loan approval, and judicial decision-making. Regulations such as the EU AI Act also impose transparency and explainability requirements on high-risk AI systems.
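One of the simplest feature-importance techniques mentioned above is permutation importance: shuffle one input feature, re-run the model, and measure how much the error grows. The sketch below illustrates the idea using only the standard library; the `model` function, feature names, and dataset are invented stand-ins for a real trained model, not part of any specific XAI library.

```python
import random

# Toy "black-box" model scoring a loan application from two features.
# In practice this would be a trained neural network or ensemble;
# this stand-in keeps the example self-contained.
def model(income, debt):
    return 0.8 * income - 0.5 * debt

# Synthetic dataset of (income, debt) rows and the model's baseline outputs.
random.seed(0)
data = [(random.random(), random.random()) for _ in range(200)]
targets = [model(x, d) for x, d in data]

def mean_squared_error(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

def permutation_importance(feature_index):
    """Shuffle one feature column and measure how much the error grows.
    A larger increase means the model relies more on that feature."""
    column = [row[feature_index] for row in data]
    random.shuffle(column)
    shuffled = [
        (column[i], row[1]) if feature_index == 0 else (row[0], column[i])
        for i, row in enumerate(data)
    ]
    preds = [model(x, d) for x, d in shuffled]
    return mean_squared_error(preds, targets)  # baseline error is 0 here

importance = {name: permutation_importance(i)
              for i, name in enumerate(["income", "debt"])}
print(importance)
```

Because the toy model weights income more heavily (0.8 vs. 0.5), shuffling income degrades predictions more, so its importance score comes out higher. Library implementations such as SHAP refine this basic idea with game-theoretic attribution across feature combinations.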