On January 28, 2019, The Philadelphia Inquirer posted a news article to its website:
"Employers use algorithms to rate job interviewees. A new Illinois law gives candidates rights," by Abdel Jimenez of The Philadelphia Inquirer.
My attention was drawn to the article because of my interest in workplace AI and machine learning. I am fascinated by how these technologies work and want to know more about how they are applied in different settings. Unfortunately, the article itself is about the rights the law protects and does very little to report on the AI and machine learning used to rate job interviewees, even though interview-rating AI is the opening concept of the article's headline.
While Abdel Jimenez reports the human-interest scoop about rights violations and federal inquiries, she misses the opportunity to investigate what is really going on in this story. Notably, a Utah software company has created an algorithm to rate job interviews. I would love it if a reporter actually investigated that fascinating application of machine learning. Utah's HireVue is in fact mentioned in the article; yet it would be a welcome development in news media to report on what is going on with AI at companies like HireVue, rather than only on the people who resist AI development.
The ship has already sailed. News media are inciting AI angst, but enterprises are already well on the way to implementing AI in the workplace. Whether it is right or wrong, I don't care; I want to know how it is being done.
Jason Lawrence Researched HireVue
I have made a new habit. If the news media won’t report about AI in the workplace, then I will have to figure it out on my own. I went to HireVue’s website. What did I discover? Jimenez did very little to understand HireVue or their technology.
For instance, Jimenez states that the Electronic Privacy Information Center filed a complaint with the Federal Trade Commission, alleging that HireVue uses facial recognition software and falsely denies it. She notes that HireVue's CEO says the claim has no merit. Jimenez doesn't investigate the merit.
Another example of Jimenez's failure to report on the AI itself is the claims about gender and racial bias. While gender and racial bias have become standard fare in news media coverage of AI, the charge is misplaced in the case of HireVue. In fact, one of HireVue's stated missions is to enable companies to bypass the human biases already embedded in the DNA of human interviews. Jimenez quoted all the right people about gender and racial bias in interview-rating AI; she didn't report anything about how, by HireVue's account, its software doesn't even calculate such things.
How AI Interview Raters Work
The way interview selection works now.
Nowadays, a job posting may receive hundreds of applications. A human takes 3-10 seconds to screen a single resume, and that is after the HR portal has filtered the available applications. A human narrows the pool to a handful of qualified people who will receive an interview. Of that handful, perhaps two candidates will go to a second interview. After the second interview, the top candidate is given an offer.
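The funnel above can be sketched in a few lines of code. Everything here is illustrative: the 50% portal pass rate, the five interview slots, and the two callbacks are my assumptions for a toy example, not data from HireVue or any employer.

```python
# Toy sketch of the conventional hiring funnel described above.
# All numbers (pass rate, interview slots, finalists) are invented for illustration.

def screening_funnel(applications, portal_pass_rate=0.5, interviews=5, finalists=2):
    """Return the pool size remaining at each stage of a conventional funnel."""
    passed_portal = int(applications * portal_pass_rate)  # HR portal keyword filter
    first_round = min(interviews, passed_portal)          # 3-10 second human resume screen
    second_round = min(finalists, first_round)            # callbacks after first interview
    offers = 1 if second_round > 0 else 0                 # one offer to the top candidate
    return {
        "applied": applications,
        "passed_portal": passed_portal,
        "interviewed": first_round,
        "second_round": second_round,
        "offers": offers,
    }

print(screening_funnel(200))
```

Run with 200 applicants, the sketch makes the bottleneck obvious: only a handful of the hundred portal survivors ever reach a human interviewer.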
The way interview selection works with HireVue.
What if there were a way to interview more candidates than a single human could possibly interview in a busy 40-hour week? What if the handful of interviewees were selected more accurately than by a human's flawed 3-10 second screening? What would happen if, instead of 3-10 second resume screenings, there were an AI-rated interview for every candidate that made it through the HR portal's filter?
A Difference that Matters
HireVue provides case studies of employers, like the Red Sox and Urban Outfitters, who offer their own answers to those questions. Put plainly, no human could interview every candidate who clears the portal; and if a company replaces human screening with AI-rated interviews, HireVue believes no human could be as unbiased or as accurate as its AI.
In the cases of facial recognition and gender/racial bias, HireVue just doesn't go there. Its interview-rater algorithms are based on values set by leading psychologists and requirements set by hiring managers, rather than judging candidates by the GPA on a resume or whether someone blinks too much in an interview. The interview rater matches candidates against successful employees and gauges whether a candidate explains themselves in a way that matches the company's vision. It is the human interviewer who worries about gender, race, blinking eyes, or whether someone leaves a good first impression.
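"Matching candidates against successful employees" can be pictured as comparing competency vectors. This is my own sketch of the general idea, not HireVue's actual model: the competency names, scores, and the cosine-similarity measure are all assumptions chosen for illustration. Note what is absent from the inputs: gender, race, GPA, and appearance never enter the calculation.

```python
# Hypothetical sketch: score a candidate against a top-performer benchmark
# on competencies that psychologists and hiring managers chose up front.
# Competency names and all numbers are invented for illustration.
import math

COMPETENCIES = ["problem_solving", "communication", "team_orientation"]

def cosine(a, b):
    """Cosine similarity between two equal-length score vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rate(candidate, benchmark):
    """Rate a candidate's competency scores against the benchmark profile."""
    c = [candidate[k] for k in COMPETENCIES]
    b = [benchmark[k] for k in COMPETENCIES]
    return round(cosine(c, b), 3)

top_performers = {"problem_solving": 0.9, "communication": 0.8, "team_orientation": 0.7}
candidate = {"problem_solving": 0.85, "communication": 0.6, "team_orientation": 0.75}
print(rate(candidate, top_performers))  # a value in [0, 1]; higher = closer match
```

The design point is that the rater can only weigh what it is fed; if protected attributes are never encoded as inputs, the model has no direct handle on them (though proxies remain a separate, harder question).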
Suddenly, unqualified people who know how to interview very well are not selected over qualified people who don’t interview well.
The Real Scoop News Media are Missing
How did HireVue train its machine learning software so that it is ready to rate 200 applicants, a tiny data set by machine-learning standards?
Did Abdel Jimenez subject herself to an AI-rated interview for her own job and review her results? Did she consider the value she adds to her news organization, and whether the AI missed that value or successfully predicted it? How?
Companies already use things like personality tests to screen candidates, where employers select the kinds of personalities they want to hire. Is HireVue’s AI an improvement on personality tests, a better solution than personality tests, or just another iteration of the mistakes made by personality testing?
HireVue has testing data on the accuracy of its technology with respect to race, gender, age, and disability discrimination. It has data comparing its machine's accuracy against the interview accuracy of hiring managers. How does HireVue calculate the success of its technology against these data? How was that testing conducted?
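I don't know how HireVue runs its discrimination testing, but there is a publicly documented baseline any vendor could be asked about: the EEOC's "four-fifths rule," under which a selection tool shows adverse impact if any group's selection rate falls below 80% of the highest group's rate. The group labels and counts below are invented for illustration.

```python
# The EEOC four-fifths (adverse impact) rule, sketched with invented numbers.
# A group "passes" if its selection rate is at least 80% of the best group's rate.

def adverse_impact(selected_by_group, applied_by_group):
    """Return, per group, whether its selection rate meets the four-fifths rule."""
    rates = {g: selected_by_group[g] / applied_by_group[g] for g in applied_by_group}
    best_rate = max(rates.values())
    return {g: (rate / best_rate) >= 0.8 for g, rate in rates.items()}

applied = {"group_a": 100, "group_b": 100}
selected = {"group_a": 30, "group_b": 20}

# group_b's rate (0.20) is only ~67% of group_a's (0.30), so group_b fails the test
print(adverse_impact(selected, applied))  # {'group_a': True, 'group_b': False}
```

These are exactly the kinds of numbers a reporter could request: selection rates by race, gender, age, and disability status for the AI rater versus the human hiring managers it replaces.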
Jason Lawrence, M.S., Ph.D.
Make Communication Part of the Product