1 minute read

Episode 4: Bias in ML


Did you know that not all bias in machine learning (ML) is bad? In fact, the concept of bias was first introduced into ML by Tom Mitchell in his 1980 paper, “The need for biases in learning generalizations.” He defines learning as the ability to generalize from past experience in order to deal with new situations that are related to that experience, but not identical to it. Applying what we’ve learned from past experiences to new situations is called an inductive leap, and it seems possible only if we apply certain biases to choose one generalization about a situation over another. By building some types of bias into ML architectures, we give algorithms the capacity to make similar inductive leaps.

John Shawe-Taylor, the first AI Chair of UNESCO, said, “Humans don’t realize how biased they are until AI reproduces the same bias.” He is referring to the most famous type of bias in ML: human cognitive bias that slips into the training data and skews results. Cognitive bias is a systematic error in thinking that affects the decisions and judgments people make. Melody, Nikhil, and Saurabb discuss several examples of how cognitive bias has negatively affected models and our society, from upside-down YouTube videos, to an utter lack of facial recognition, to Amazon’s AI recruitment tool.

Do you have great examples to share? Hit us up on Twitter @NoBiaS_podcast.