What 'Moneyball' should teach us about job interview bias
3 simple rules to encourage evidence-based assessments
May 14, 2020 | 5 MIN READ
The movie ‘Moneyball’, based on Michael Lewis’ book of the same name, tells the real-life story of a professional baseball team and its innovative approach to player valuation. The Oakland A’s were a low-budget team forced to compete against teams with significantly greater resources.
To level the playing field, the A’s general manager Billy Beane (played by Brad Pitt in the movie) adopted a revolutionary strategy: picking players based on their performance statistics.
Using this approach, the team won 20 consecutive games, an American League record. Even more importantly, it changed the way professional baseball teams look at data and player valuation.
The whole professional baseball market, it turned out, had been valuing players incorrectly. Until then, player valuation relied solely on scouts’ judgment, and that judgment was heavily influenced by human bias.
Here are some of the most common biases:
Attractiveness Bias. The tendency to evaluate physically attractive subjects more favorably.
'Similar to Me' Bias. The tendency to evaluate more favorably subjects that remind people of themselves in any way.
Anchoring Bias. The tendency to use a given focal point as a reference for further evaluation. Example: suppose you have two candidates with equal skill levels, one tall and one short. If a rater is asked to evaluate them first on height and then on skill, the taller candidate is likely to receive a higher skills assessment.
Confirmation Bias. The tendency to form near-instant impressions of people and then interpret all new data as confirming them.
In the baseball players' market, the effect of human bias was so strong that simple statistical measures turned out to be better predictors of success.
When I’m finding people for a job, I’ll look for someone who doesn’t ‘look right’. If someone doesn’t ‘look right’ as a doctor, they’re almost certainly a great doctor.
— Michael Lewis, author of ‘Moneyball’
This insight raises the question: if one of the most popular markets in the world could be so poorly understood by experts, could this be affecting other markets in the same way?
The answer seems clear: yes.
Over the past decades, these techniques have been applied to NBA drafts and even to recruitment in the Israeli army (a notably effective organization, whatever your political views). In both cases, they have proved to be more reliable predictors of future performance than human judgment.
As recruiters and interviewers, we’re nothing more than employee scouts. We interview candidates and then make a judgment on which would likely perform best on the job.
The problem is, we don’t seem to be very good at it. A comprehensive research review by Schmidt and Hunter suggests that traditional unstructured interviews are poor predictors of performance. Significantly better predictors were work sample tests, general mental ability tests and integrity tests.
Nevertheless, adoption of these methods seems to face an uphill battle. As one HBR article suggests, “managers are overconfident about their own expertise and experience, and they dislike deferring to more structured approaches that might outsource human judgment to a machine.”
It is not only naive, but plain wrong to assume we’re not subject to these biases.
I’m biased. Now what?
Unfortunately, knowing we’re biased doesn’t by itself guard us against bias. We need to put the right processes in place to prevent it.
Still, screening someone to join your team isn’t an exclusively objective task. Everyone has met people who look right on paper but prove impossible to work with.
How can we balance these competing needs: an objective assessment of skills and a subjective assessment of character?
One answer is what the team at Basecamp does. They rely on an initial screening based on objective criteria. Then, once they’re confident that the candidates who remain are able to do the job effectively, they present them to the team.
If, after the team meeting phase, the team dislikes the candidate for some reason, they simply won’t hire them. This strategy separates the objective criteria from the subjective ones.
There are 3 simple but effective rules you can implement to reduce screening bias and encourage an evidence-based assessment:
Conduct technical assessments. The more these technical assessments resemble the work the person will be doing, the better predictors of performance they are. General mental ability and integrity tests also seem to be useful.
Conduct structured interviews. Most job interviews are still unstructured, and thus particularly prone to bias. Structured interviews follow a written script, and interviewers fill out a scorecard based on the candidate’s answers. Interviewers shouldn’t be able to see other interviewers’ scorecards before completing their own.
Use facts, not gut feelings, to justify decisions. It’s often useful to have a final step of the process where you encourage the team to meet the top candidate and share their opinions on them. In these debrief sessions, ask the team to provide facts, details and specific examples about the candidate, not just gut feelings.
Reducing bias in job interviews is both fairer for candidates and better for employers. Take advantage of these strategies but keep in mind that it may take some time to remove bias from candidate assessments.
Hiring a tech team? Book a FREE Hiring Strategy Review Session to get a clear roadmap on how to move forward: how to define the positions, where to look for candidates and how to structure the whole recruitment process.
Prefer to start on your own? Take a look at our 1-Page Recruitment Plan to follow a proven framework and quickly hire a great team.
Founder at Remote Crew. Scaling your tech team? Get in touch.