AI bias
AI bias occurs when a system systematically favors or disadvantages certain groups or outcomes, often unintentionally. It can stem from historical data, from how prompts are phrased (prompt), or from training data that lacks diversity.
Bias can affect recruiting, credit decisions, and content recommendations, so GDPR compliance, documentation, and human review are critical, alongside hallucination mitigation (hallucination).
Key characteristics
- Can emerge in training data, labeling, objective design, or real-world usage patterns.
- Usually appears as systematic group-level differences in outcomes, not just occasional incorrect answers.
- Requires testing, documentation, and human oversight, especially in high-impact decisions.
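The group-level framing above can be made concrete with a simple fairness check. The sketch below computes per-group selection rates and a disparate-impact ratio (the "four-fifths rule" heuristic) over decision records; the data, group labels, and function names are illustrative assumptions, not part of the original text.

```python
from collections import defaultdict

# Hypothetical decision records: (group, selected) pairs, for illustration only.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rates(records):
    """Return the selection rate (selected / total) for each group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, chosen in records:
        totals[group] += 1
        if chosen:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates(decisions)       # {"A": 0.75, "B": 0.25}
# Disparate-impact ratio: lowest group rate divided by highest group rate.
# A value below ~0.8 is a common heuristic signal for further review.
ratio = min(rates.values()) / max(rates.values())
```

A check like this only surfaces one kind of group-level difference; it does not replace the documentation and human oversight the list above calls for, especially in high-impact decisions.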