One example of rogue AI predictions relates to curly fries and IQ. Researchers from the University of Cambridge created an algorithm based on the likes of 60,000 Facebook users. Among other things, it found that Facebook users who like curly fries have higher IQs than those who don’t.
To fuel this prediction, the algorithm measured the pattern of "likes" on Facebook at the time. As the study gained wide press coverage, more people started to "like" curly fries. Maybe they did it to show they were smart, or simply because curly fries are tasty.
But the new pattern of likes can change the predictive value of liking curly fries, so that the variable no longer predicts higher IQ. This true story illustrates some of the challenges of using AI in high-stakes employment decisions.
Why did the algorithm find the pattern in the first place? Likely not even the researchers can explain it.
Perhaps it was a bias in the first sample. For example, the curly-fry page may have started as the page of a stand in Harvard Square frequented by local students; as more stands opened in other areas, the association with IQ disappeared.
In truth, we will never know. And that's the problem with not knowing why one variable predicts another. Unless there is theory and research to guide a prediction, there is a chance it is a temporary fluke in the data that won't stand the test of time. That's why organizational psychologists urge caution when considering tests that use an AI component.
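The curly-fry effect is easy to reproduce in a toy simulation. The sketch below is purely illustrative (all numbers and sampling choices are invented, not drawn from the Cambridge study): an early, biased sample where "likes" come mostly from a high-IQ campus crowd produces a strong correlation, while a later, broader sample where liking is unrelated to IQ shows the association evaporating.

```python
import random
import statistics

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Biased early sample: most "likes" come from a high-IQ campus crowd.
early_iq = [random.gauss(115, 10) for _ in range(500)]
early_like = [1 if iq > 110 and random.random() < 0.8 else 0
              for iq in early_iq]

# Later, broader sample: liking curly fries is unrelated to IQ.
late_iq = [random.gauss(100, 15) for _ in range(5000)]
late_like = [1 if random.random() < 0.5 else 0 for _ in late_iq]

print(f"early sample r = {corr(early_like, early_iq):.2f}")  # clearly positive
print(f"later sample r = {corr(late_like, late_iq):.2f}")    # near zero
```

An algorithm trained only on the early sample would happily report the curly-fry "signal," with no way to tell you it was an artifact of who happened to be in the data.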
Contrast the curly-fry example with a test of sales potential that includes an Extraversion personality scale as part of its design. Extraversion can be reliably measured and has a long research history. Studies have found that Extraversion correlates with sales performance. Furthermore, it's possible to show that the relationship is valid as one of several parts of a test for hiring salespeople. The designers of the test don't give you the answer key, but they can certainly explain why it works. Without the distraction of curly fries.
In this algorithm-driven world, it might serve us well to heed some advice from Harry Potter: “Never trust anything that can think for itself if you can’t see where it keeps its brain.” The algorithmic equivalent might be “don’t trust decision rules that can’t be explained or just don’t make sense.”
Using personality tests in the workplace is not simply a "good or bad" proposition; it's about learning how to use them correctly. Personality tests can have a dark side when used incorrectly or overused as the sole basis for a decision. We offer our Principles to help employers leverage the right tools to make informed, fair, and data-driven decisions about their people.
This article has been provided by Fytster (https://www.fytster.com/)
Doug Reynolds, Ph.D., serves on Fytster's Scientific Advisory Board and is Executive Vice President at Development Dimensions International (DDI), where his department includes teams of psychologists and engineers who develop software systems for assessment and learning products used in large organizations. Doug has published and presented frequently on topics related to the intersection of I-O psychology and technology. He co-edited Next Generation Technology-Enhanced Assessment and the Handbook of Workplace Assessment, and co-authored Online Recruiting and Selection. He also served as SIOP president in 2012-2013.