Part 2 in our Re-thinking Recruitment series, a practitioner’s guide to recruitment written from an academic perspective by Dexter Davies Smith.
In Part 1 of the series, we began our review of recruitment by focusing on the true costs involved. The difference in value between a high-performing and a low-performing employee can be astronomical, and getting this wrong can be one of the most costly investments an organisation makes.
In the coming editions of this series, we will examine different recruitment methods and identify their strengths and weaknesses. Before we can do so, we need to explore some concepts we can use to evaluate the worth of a selection method. We conclude by exploring some of the reasons why human judgment so often proves fallible when predicting how well a candidate will go on to perform as an employee.
Don’t Judge a Book by its Cover – Face Vs Predictive Validity
When examining a group of candidates and trying to identify which one would become the top-performing employee, it is necessary to make assumptions from the information available. Different selection methods are really just different ways of gathering information with which to predict performance. A very common example:
BigWigs PLC have a new and exciting graduate role. Applicants for this role must have:
- Graduated from University with at least a 2.1.
This requirement makes a key assumption: that people with a 2.1 (or better) will outperform people without one – and it is easy to see how that assumption has been made. People with a 2.1 have demonstrated a level of academic ability, and we therefore assume this will translate into higher performance in the role. This is called face validity: on the surface of it, the selection method seems a good predictor of job performance and therefore intuitively makes sense. Face validity is usually the reason people use the selection methods they do.
Academic research then aims to answer the question that should inevitably follow this assumption: does employing people with a certain academic qualification actually lead to higher-performing staff? The ultimate goal is to be able to say how much of an employee’s performance can be predicted by the recruitment method – this is called predictive validity. Predictive validity is often buried in academic papers, hidden in a matrix of statistics that can leave even the most ardent fan of psychology a little lost and bewildered.
So, there are two key differences between face validity and predictive validity:
- Face validity is immediately accessible; it is just your gut reaction to how good you think a method is, whilst predictive validity must be dug out of the statistics buried away in academic papers.
- Face validity bears no necessary relationship to how well a recruitment method will identify candidates likely to be top performers, whereas predictive validity, by its nature, is the most accurate indication of how good a selection method is at doing so.
This is not to say face validity has no value – if a method lacks face validity, candidates may lose interest or fail to see the value in the recruitment methods you are subjecting them to.
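For readers who want to see what predictive validity looks like in concrete terms: it is typically reported as a correlation coefficient (r) between scores on the selection method and later measures of job performance. Below is a minimal, purely illustrative sketch in Python – the scores and ratings are invented for demonstration, not taken from any real study.

```python
# Illustrative sketch only: all numbers below are invented for demonstration.
# Predictive validity is usually reported as a Pearson correlation (r)
# between a selection-method score and a later measure of job performance.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical data: interview scores at hire vs. performance ratings a year on.
interview_scores = [62, 71, 55, 80, 68, 90, 47, 75]
performance_ratings = [3.1, 3.4, 2.8, 3.9, 3.0, 4.2, 2.5, 3.6]

r = pearson_r(interview_scores, performance_ratings)
print(f"Predictive validity (r) = {r:.2f}")
```

An r of 1.0 would mean the method predicts performance perfectly; an r of 0 would mean it predicts nothing at all. Real selection methods fall somewhere in between, which is what the coming parts of this series will compare.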
The Dark Side of Recruitment
Inevitably, any method of recruitment will, to some degree, have an element of adverse impact. This means that the method used is unfairly biased against one or more groups of people. In our example, requiring a 2.1 excludes anyone who has not been in a position to attend University. Attending University is an expensive investment, and various socio-economic factors may prevent certain groups from attending.
Adverse impact is unavoidable, so we must be aware of it. It is imperative to consider the ways in which the recruitment processes we choose are susceptible to excluding certain groups. Moreover, we should make an active effort to account for and minimise these biases. Ultimately, we should have a justification for the recruitment process we use beyond wilful ignorance of its negatives.
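One common rule of thumb for quantifying adverse impact (not from this article, but widely used in the selection literature) is the "four-fifths rule": a method is flagged when any group's selection rate falls below 80% of the most-favoured group's rate. The sketch below uses entirely invented applicant counts to show the arithmetic.

```python
# Illustrative sketch only: the applicant and hire counts are invented.
# The "four-fifths rule" flags potential adverse impact when a group's
# selection rate is below 80% of the highest group's selection rate.

def selection_rates(applicants, hired):
    """Selection rate per group: number hired divided by number who applied."""
    return {g: hired[g] / applicants[g] for g in applicants}

def adverse_impact_ratios(applicants, hired):
    """Each group's selection rate relative to the most-favoured group."""
    rates = selection_rates(applicants, hired)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical pools for a graduate role like the BigWigs PLC example.
applicants = {"group_a": 200, "group_b": 150}
hired = {"group_a": 40, "group_b": 15}

for group, ratio in adverse_impact_ratios(applicants, hired).items():
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Here group_a is hired at a rate of 20% and group_b at 10%, giving group_b an impact ratio of 0.50 – well below the 0.8 threshold, so this hypothetical process would warrant scrutiny.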
Don’t Trust Your Judgment of the Book Full Stop!
Being aware of the biases involved in adverse impact leads us nicely into why humans are so poor at judging how others will perform in the future. The standard recruitment process will almost always include an interview, and intuitively (wave hello to high face validity) we assume we can judge someone’s character and ability from it. We believe what we see and make a number of assumptions from that meeting. The question is: how reliable is that judgment?
Now let’s look at some examples where human judgment is less than perfect. Given this introduction, you may be bracing yourself for some ‘all is not as it seems’ examples, so please listen to your initial reaction, not the conscious thought warning you that it’s a trap.
How many prongs does this object have? Are you sure?
Are these lines straight? Get a ruler out and check.
These are simple demonstrations but, nonetheless, I hope they highlight an important issue: what we believe we are seeing and what is really there are not always the same thing. We are very prone to being misled, even when we are consciously looking out for it.
The next key example of the fallibility of our judgment comes from the now-famous Gorilla experiment. For those who haven’t seen it, press play below and count how many times the white team passes the ball. For those who have seen the original, this video is an updated version with a twist. See how you fare now you know the basic premise.
Still trust how reliable your judgment is? Well, on the bright side, we can at least rely on our judgment to be objective when selecting someone for a job, right? We certainly wouldn’t be affected by their physical attractiveness? Whether they wear glasses? Their nationality? Gender? Ethnicity? Whether they look like us? According to the research that has been conducted, we are susceptible to all kinds of biases and unconscious stereotypes; it is an innate quality of being human.
So, it becomes clear that we have an unreliable system for judgment – one that is easily misled and misses key, obvious pieces of information. This system is also pre-loaded with preferences for all manner of things which are, in reality, not particularly useful for judging how well someone will perform as an employee. At least we can rely on the candidate to show their true self in the interview; it would be a nightmare if we made this judgment based on someone presenting a more (or less) capable version of themselves. Oh dear…
By the way, for those who wanted an answer, there are clearly three prongs on the trident. No wait, two prongs. Or three? Try drawing it, if you can.
Part 3 in the Re-thinking Recruitment series, in which we will review application forms, CVs and interviews, will be coming soon.