AI algorithms are designed to learn from data, but they can inadvertently inherit biases present in that data. How does this happen, and why can it have serious consequences?
Akshita P. Naidu · Beginner
How does sample bias occur, and what are its implications for artificial intelligence systems?
Sample bias happens when the data used to train or test an AI system is not representative of the population as a whole. A model trained on such data generalises poorly, reinforces the skew it was trained on, strengthens existing prejudices, and lacks robustness when it encounters under-represented groups.
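A minimal sketch of the mechanism, using entirely hypothetical numbers: suppose a population contains two equally sized groups with different outcome rates, but the training sample is drawn almost entirely from one group. Any statistic (or model) estimated from that sample will be skewed toward the over-represented group.

```python
import random

random.seed(0)

# Hypothetical population: two equally sized groups with different
# approval rates (group A ~70%, group B ~40%; numbers are illustrative).
population = [("A", random.random() < 0.7) for _ in range(5000)] + \
             [("B", random.random() < 0.4) for _ in range(5000)]

# Sample bias: 95% of the training sample comes from group A,
# even though the real population is split 50/50.
sample = [row for row in population if row[0] == "A"][:1900] + \
         [row for row in population if row[0] == "B"][:100]

def approval_rate(rows):
    """Fraction of rows with a positive outcome."""
    return sum(outcome for _, outcome in rows) / len(rows)

# The biased sample overestimates the population approval rate,
# because group A dominates the sample but not the population.
print(f"Population approval rate: {approval_rate(population):.2f}")
print(f"Biased-sample estimate:   {approval_rate(sample):.2f}")
```

Running this shows the sample estimate sitting well above the true population rate; a model trained on that sample would make the same systematic error, performing worst on group B, exactly the group the sample under-represents.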