I’m not your typical writer for this website in that I’m an employee, not a recruiter. Sometimes, however, it can be good to consider your practices from our target’s perspective. So let me tell you a little something about myself: on a scale of 1-30, I’m about an 18. But that’s before you factor in things like my level of “fame” or “social class.” Those qualities could earn others as much as 10 points. I’m pretty sure I’d only rack up two or three.
If you are wondering what the *(&) I’m talking about, imagine how thousands of journalism professionals felt reading about the candidate-scoring spreadsheet used by the investigative news start-up The Markup.
While the CEO has since said that the criterion “critical of technology’s societal effects” was not meant to identify candidates who would take the most adversarial stance, the fact that scores were tallied up suggests the contrary. By that rubric, the ideal candidate would be an experienced journalist with a facility for technology and the ability to articulate it clearly for the reader. They would also be inherently skeptical of technology’s effect on society, come from a “privileged” background, and be internet “famous.”
I can’t say for certain if this is the case. But it raises some serious questions about quantification and our preconceived notions of the perfect hire.
The executive team at The Markup didn’t invent this sort of evaluation practice. Interview scorecards are used by hiring teams to evaluate a candidate in the interview process. They allow each interviewer to rank potential hires along a given set of criteria so that they can compare ratings of the candidate pool and weed out weaker candidates. In theory, creating an interview scorecard forces hiring managers and recruiters to identify the skills and attributes needed to be successful in a role.
It might be easy to look at The Markup’s spreadsheet as an outlier, an amateurish attempt to superimpose hiring rigor onto a very specific journalistic mission. However, I fear it demonstrates an all-too-common desire to presage the “right cultural fit” or, worse, to reduce candidates to a manageable set of digits in order to dump as many into the bin as possible.
I understand the desire to codify and simplify. It is downright difficult to evaluate each individual, well, individually.
In my industry (digital media), there are those who regard hyper-targeted advertising as the best thing to come along since the web opened its doors for business. But targeting ads has a dark side. It relies on the assumption that we can effectively deduce exactly who will buy a product before we advertise it. In reality, much of the money spent advertising this way writes off large swaths of would-be buyers.
The same goes for candidates. While quantifying certain truly indispensable skills may be useful as part of the hiring process, the perfect hire is likely to fall well outside a tidy equation. And our quest for the predetermined perfect fit may actually hurt more than it helps.
Time and again, research has shown that diversity makes businesses stronger and more successful. A recent Harvard Business Review study showed, for example, that among VC firms, the more similar the investment partners, the lower their investments’ performance. That’s right: shared educational backgrounds, ethnicity, and more actually reduced decision quality, strategy, and overall performance.
Yet we still believe that there is one archetypal candidate who will fit in with our group and carry us to the win. Not so.
To achieve great things, we have to have our ideas tested, pushed, stretched, and yes — sometimes shot down. Maybe this will be done by someone who has almost no experience, or didn’t go to “the right school,” or actually sees a whole lot of holes in our business plan.
So, if you must keep your scorecards, recognize that there’s complexity in the calculus of team building — and that over-quantification can blind you to the most important variables.