By Paul Barrett on May 25, 2017
A common perception of psychometric assessments is that the longer they are, the more likely candidates are to stop attending to the assessment and to respond carelessly. Some may even abandon the assessment and withdraw their candidacy altogether.
Cognadev has a particular interest in this issue because its Cognitive Process Profile (CPP) and Learning Orientation Index (LOI) can each take 1 to 3 hours to complete (candidates can take breaks throughout). Of the 300,000 candidates we have assessed, we have never received a complaint about the length of the CPP. But, unlike passive self-report questionnaires, the CPP and LOI are performance-based, dynamically interactive assessments. In a very real sense, these kinds of assessment hold candidates’ attention precisely because they are so immersive and interactive.
But even so, the conventional wisdom persists among some: the longer an assessment, the less valid its results might be. That “conventional wisdom”, however, just took a hammering from two recent publications.
The abstract to the first article by Speer, King, & Grossenbacher (2016) is:
“This study investigated how the length of pre-employment assessments affects applicant reactions to the testing process and organization. Using a between-subjects design, participants took one of four assessments (short personality, long personality, short cognitive, long cognitive) where they were incentivized to perform well, followed by a survey assessing perceptions of procedural justice, organizational attractiveness, and likelihood of accepting a job offer. Longer tests did not worsen applicant reactions for either personality or cognitive tests, and in fact individuals taking a longer cognitive assessment reported more favorable applicant reactions”.
The abstract to the second article by Hardy III, Gibson, Sloan, & Carr (2017) is:
“Conventional wisdom suggests that assessment length is positively related to the rate at which applicants opt out of the assessment phase. However, restricting assessment length can negatively impact the utility of a selection system by reducing the reliability of its construct scores and constraining coverage of the relevant criterion domain. Given the costly nature of these trade-offs, is it better for managers to prioritize (a) shortening assessments to reduce applicant attrition rates or (b) ensuring optimal reliability and validity of their assessment scores? In the present study, we use data from 222,772 job-seekers nested within 69 selection systems to challenge the popular notion that selection system length predicts applicant attrition behavior. Specifically, we argue that the majority of applicant attrition occurs very early in the assessment phase and that attrition risk decreases, not increases, as a function of time spent in assessment. Our findings supported these predictions, revealing that the majority of applicants who quit assessments did so within the first 20 min of the assessment phase. Consequently, selection system length did not predict rates of applicant attrition. In fact, when controlling for observed system length and various job characteristics, we found that systems providing more conservative (i.e., longer) estimates of assessment length produced lower overall attrition rates. Collectively, these findings suggest that efforts to curtail applicant attrition by shortening assessment length may be misguided.”
These two studies indicate that, contrary to expectations, candidates perceive longer assessments as more valid, or at least as a serious attempt to assess their psychological functioning.
It’s ironic: in seeking to enhance validity and reduce attrition, test publishers have cut administration times by producing short-form versions of their assessments, yet what they may have achieved is exactly the opposite – lower validity and increased attrition!
OK – we have to be realistic here; these results come from just two studies.
However, a complex trade-off is at work here: the candidate’s perceived utility calculation, i.e. the economic need or personal motivation to acquire the job versus the frustration caused by completing long assessments (or multiple assessments in a drawn-out selection process), coupled with the perceived relevance of the assessments to the job-role. Duration per se is a factor, but what matters more is the candidate’s motivation to be selected for the job-role. If that motivation is weak, then the longer an assessment takes, the more annoying it becomes. Likewise, if it is not clear to the candidate why an assessment is relevant to the job-role, frustration will build the longer it takes.
But getting that information balance ‘just right’ is also difficult, because of the ATIC phenomenon! ATIC stands for “Ability To Identify Criteria”, a term introduced in 2011 by Kleinmann, Ingold, Lievens, Jansen, Melchers, and König. The last sentence of a 2012 constructive replication sums up the problem:
“Individuals with the ability to discern critical performance criteria [ATIC] are also better at providing an ideal-employee profile on a personality inventory and at behaving in a way consistent with this profile in a performance situation.” p. 297.
The ATIC phenomenon is also at work in interviews (as another study, reported in 2015, showed).
You might feel this is a ‘damned if I do, damned if I don’t’ scenario, and you would be right!
If you “tell all” before a long assessment, you may lose the validity that comes from identifying candidates who possess the ATIC ability to infer the link between what they are doing and the job-role requirements, because now everybody knows in advance. Then again, if candidates don’t see the relevance of the long assessment to the job-role, you may lose validity as some ‘desirable’ candidates quit the assessment and the application process altogether.
Clearly, how you introduce a long assessment to a candidate really does matter!
Hardy III, J.H., Gibson, C., Sloan, M., & Carr, A. (2017). Are applicants more likely to quit longer assessments? Examining the effect of assessment length on applicant attrition behavior. Journal of Applied Psychology, In Press, 1-12.
Ingold, P.V., Kleinmann, M., König, C.J., Melchers, K.G., & van Iddekinge, C.H. (2015). Why do situational interviews predict job performance? The role of interviewees’ ability to identify criteria. Journal of Business and Psychology, 30, 2, 387-398.
Klehe, U., Kleinmann, M., Hartstein, T., Melchers, K.G., König, C.J., Heslin, P.A., & Lievens, F. (2012). Responding to personality tests in a selection context: The role of the ability to identify criteria and the ideal-employee factor. Human Performance, 25, 4, 273-302.
Kleinmann, M., Ingold, P.V., Lievens, F., Jansen, A., Melchers, K.G., & König, C.J. (2011). A different look at why selection procedures work: The role of candidates’ ability to identify criteria. Organizational Psychology Review, 1, 2, 128-146.
Speer, A.B., King, B.S., & Grossenbacher, M. (2016). Applicant reactions as a function of test length: Is there reason to fret over using longer tests? Journal of Personnel Psychology, 15, 1, 15-24.