As machine learning is increasingly used to market and sell, we must consider how biases in models and data can impact society. Arizona State University Professor Katina Michael joins Frederic Van Haren and Stephen Foskett to discuss the many ways in which algorithms are skewed. Even a perfect model will produce biased answers when fed input data with inherent biases. How can we test for and correct this? Awareness is important, but companies and governments should also take an active interest in detecting bias in models and data.
Three Questions
- Frederic: When will AI be able to reliably detect when a person is lying?
- Stephen: Is it possible to create a truly unbiased AI?
- Tom Hollingsworth of Gestalt IT: Can AI ever recognize that it is biased and learn how to overcome it?
Guest
Katina Michael, Professor in the School for the Future of Innovation in Society and the School of Computing and Augmented Intelligence at Arizona State University. Read her paper in the Journal of Business Research. You can find more about her at KatinaMichael.com.
Hosts
- Frederic Van Haren, Founder at HighFens Inc., Consultancy & Services. Connect with Frederic at HighFens.com or on Twitter at @FredericVHaren.
- Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen’s writing at GestaltIT.com and on Twitter at @SFoskett.
For your weekly dose of Utilizing AI, subscribe to our podcast on your favorite podcast app through Anchor FM, and check out more Utilizing AI podcast episodes on the dedicated website, https://utilizing-ai.com/.