Well, this result (that the unstructured interview predicts job performance just as well as the structured interview) has been available on the web in a research paper since 2013, yet I still see many trumpeting structured interviews and personality assessment as superior to the humble unstructured interview.
The current version of the Schmidt, Oh, and Shaffer working paper is freely available for download from the Social Science Research Network (SSRN) site. It updates the predictive validities reported in the now famous Schmidt and Hunter (1998) article, listing the validities of a variety of assessment methods as predictors of job performance.
And yes, from the updated list of validities in Table 1 on page 65 of this article, unstructured interviews possess the same operational validity¹ (.58) as structured interviews, second only to ability assessment (.65).
From page 17 of the working paper, the authors explain:
“Until recently, the available meta-analytic data indicated that the unstructured interview was less valid than the structured interview. Application of the new, more accurate method of correcting for range restriction changed that conclusion (Oh, Postlethwaite, & Schmidt, 2013). As shown in Table 1, the average operational validity of the structured and unstructured interviews is equal at .58. With the former less accurate procedure for correcting for range restriction the validity estimates were .51 for the structured interview and .38 for the unstructured interview.”
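For the curious, here is a minimal Python sketch of what these corrections actually do, assuming the standard psychometric formulas: the older direct (Thorndike Case II) range restriction correction, and the indirect (‘Case IV’) correction of Hunter, Schmidt, and Le (2006) that the quoted passage alludes to. The input values below are illustrative assumptions of mine, not figures taken from the working paper.

```python
import math

def case2_correction(r, u):
    """Thorndike Case II: correct a correlation observed in a
    range-restricted sample; u = restricted SD / unrestricted SD."""
    return r / math.sqrt(u**2 + r**2 * (1 - u**2))

def operational_validity_direct(r_obs, u_x, r_yy_i):
    """Older direct-correction procedure (one common ordering):
    disattenuate for criterion unreliability in the incumbent sample,
    then apply Case II to the observed scores."""
    return case2_correction(r_obs / math.sqrt(r_yy_i), u_x)

def operational_validity_indirect(r_obs, u_x, r_xx_a, r_yy_i):
    """Newer indirect range restriction ('Case IV') procedure,
    following Hunter, Schmidt, and Le (2006).

    r_obs  : observed predictor-criterion correlation (incumbents)
    u_x    : restricted/unrestricted SD ratio of observed predictor scores
    r_xx_a : predictor reliability in the applicant pool
    r_yy_i : criterion (job performance) reliability in incumbents
    """
    # Predictor reliability in the restricted (incumbent) group.
    r_xx_i = 1 - (1 - r_xx_a) / u_x**2
    # Range restriction ratio on predictor *true* scores.
    u_t = math.sqrt((u_x**2 - (1 - r_xx_a)) / r_xx_a)
    # Fully disattenuated true-score correlation among incumbents.
    rho_tp_i = r_obs / math.sqrt(r_xx_i * r_yy_i)
    # Correct the true-score correlation for range restriction.
    rho_tp_a = case2_correction(rho_tp_i, u_t)
    # Reattenuate by applicant predictor reliability: selection must
    # use observed scores, so no correction for predictor measurement
    # error is retained in the final operational validity.
    return rho_tp_a * math.sqrt(r_xx_a)

# Illustrative inputs only (my assumptions, not the paper's data):
print(round(operational_validity_direct(0.25, u_x=0.70, r_yy_i=0.52), 2))  # 0.47
print(round(operational_validity_indirect(0.25, u_x=0.70,
                                          r_xx_a=0.80, r_yy_i=0.52), 2))   # 0.57
```

With the same assumed inputs, the indirect correction produces a noticeably larger estimate than the direct one, which is the direction of the revision reported in the quote above (.38 to .58 for the unstructured interview).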
There seem to be three courses of action available to those who either sell “structured interview designs” or promote them as superior to unstructured interviews (or indeed any form of personality assessment):
1. Ignore the new evidence and carry on as before.
2. Dispute the statistical corrections that produce the result.
3. Accept the finding and adjust claims and practice accordingly.
Doing 1 is what some will do because such evidence simply gets in the way of current practice and profitability. And some may simply be unaware of the latest evidence.
Doing 2 is very tempting. From armchair reasoning it seems ludicrous that an unstructured ‘free-for-all’ interview could ever be superior to a well-designed structured one. Furthermore, all those corrections may make statistical sense, but in this case they really do seem to be leading to completely mad results. The problem with this line of reasoning is that it leaves the entire meta-analytic evidence base in tatters: you can’t pick and choose the coefficients that suit you and reject the others, because they are all produced using more or less the same statistical methods and corrections.
So, we are left doing 3. That’s a big finding to swallow, but not actually as daunting as it first seems. I suspect most organizations use a variety of non-cognitive assessments and biodata to screen out obviously unsuitable candidates, then rely upon one or more interviews to sift through shortlists. Whereas before this new evidence I/O psychologists might have said the interview must be structured in order to maximize validity, now it’s a case of ‘please yourself’ if your primary concern is validity. However, there may still be good legal reasons for retaining a ‘structured interview’, in case a disaffected candidate claims that their non-selection was ‘unfair’.
Either way, it seems that what many have been doing for decades actually does work, and works far better than choosing among shortlisted candidates using personality measures or even assessment centre ratings. Ouch!
¹ From page 12:
“Unless otherwise noted in Tables 1 and 2, all validity estimates in Tables 1 and 2 are corrected for the downward bias due to measurement error in the measures of job performance and for range restriction on the selection method in incumbent samples relative to applicant populations. No correction is made for measurement error in the predictor scores, because observed scores must be used in selection; true scores are unknown and cannot be used. Observed validity estimates corrected in this manner estimate operational validities of selection methods when used to hire from applicant pools. Operational validities are also referred to as true validities”.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262–274.
Schmidt, F. L., Oh, I.-S., & Shaffer, J. A. (2016, October 17). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings. Fox School of Business Research Paper.