We hear scary things about AI all the time:
- “Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam”
- “Amazon scraps secret AI recruiting tool that showed bias against women”
And if you’re like a lot of people, you are beginning to associate AI failure with the word “bias”.
Obviously, AI is an advanced and complex technology. But on another level, AI systems are very simple: they do precisely what they are designed and taught to do. "Designed" refers to the math in the system. "Taught" refers to the data it is trained with. And as it turns out, both the math and the data are susceptible to bias.
We’re going to dedicate a series of posts in this blog to the topic of bias. It’s a flexible word, with many definitions. It has multiple meanings even in the context of AI.
Our first discussion will concern bias in AI algorithms, and it will remind you of Inigo Montoya. Because, in the context of algorithms and bias, “I do not think it means what you think it means.”
If waiting isn’t your thing, you can get an immediate deeper dive into the different types of AI bias in this white paper on the subject.
If you are more of a talker, we love talking about AI training data, bias and all. Drop us an email or give us a call at (855) 410-5500.