Paradoxically, the Achilles’ heel of psychometrics is that both modern and classical test theory assume that an attribute such as Conscientiousness varies as a quantity. That is, it varies in the same way the temperature gauge on your car varies, or the clock on your mobile phone, or the digits on the airport baggage scale.
But there has never been any empirical evidence that personality, ability, motivation, values, or indeed any psychological attribute varies as a quantity.
Psychologists, test-publisher psychometricians, those who train others in BPS Level A and B test use, and professional bodies such as the ITC, EFPA, Veritas, and the APA, which issue ‘best practice’ guidelines or offer assessment accreditation, never mention this one cold fact to those they address, nor what inevitably follows from it.
But, you ask, does it really matter to HR, or indeed to any psychometric test-user? Psychometric test theory and its products have proven so useful in practice that such a concern is surely only of interest to a few navel-gazing academics.
Well, as I have explained in a detailed, recently published open-access paper, The EFPA Test-Review Model: When Good Intentions Meet a Methodological Thought Disorder, the purposeful ignorance of test-publisher psychometricians and professional-society accreditation agencies has left HR and other users exposed to legal challenge wherever the precision of a test score is of substantive judicial interest.
But, I hear you say, we use confidence intervals, standard errors of measurement, alpha reliability, factor analysis, IRT, and our test publisher R&D experts all follow best-practice psychometric guidelines.
And what do all these techniques rely upon, including the statistical methods used to calculate all those parameters? Yes, you guessed it: that the attribute in question (e.g. “Abstract Reasoning”) varies as a quantity, like length, mass, or electrical current. As a famous beer advert in NZ puts it: “Yeah Right!”
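To see how deeply the quantity assumption is baked into those techniques, here is a minimal sketch, not taken from my article and using purely hypothetical Likert-item data, of the arithmetic behind two of the staples mentioned above: Cronbach’s alpha and the standard error of measurement. Every step takes sums, means, and variances of item scores, operations that are only meaningful if those numbers possess additive, interval-scale structure.

```python
import numpy as np

# Hypothetical item responses: 6 respondents x 4 Likert items (scored 1-5).
# Illustrative data only, not drawn from any real instrument.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 4, 3],
])

k = responses.shape[1]                          # number of items
item_vars = responses.var(axis=0, ddof=1)       # per-item variances
total_var = responses.sum(axis=1).var(ddof=1)   # variance of summed scale scores

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / total variance).
# Note: item scores are added and averaged as if they were quantities.
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Standard error of measurement: SD of the total score * sqrt(1 - reliability).
sem = np.sqrt(total_var) * np.sqrt(1 - alpha)

print(f"alpha = {alpha:.3f}, SEM = {sem:.3f}")
```

Both formulas are pure interval-scale arithmetic; nothing in the data itself certifies that a “4” is the same distance from a “3” as a “5” is from a “4”, which is precisely the unexamined assumption at issue.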
In practice, nobody except psychometricians interprets test scores as though they were quantitative measures like length or volume. And therein lies the stinger: if a test score is used as a cut-score, or otherwise acts as a screening device, then the status of its claimed precision becomes of potential legal interest.
Of course, there are many ways of building a robust empirical evidence base for the reliability and validity of the use of particular scores; but, to withstand a legal challenge of the kind I set out in my article, these will not use psychometric methodology, or indeed anything that invokes hypothetical true-score theory.
If you rely upon your test-publisher R&D experts to defend the use of your particular test scores, ask them this simple question, and listen very carefully to what comes back:
“Show me the empirical evidence that this measure of attribute X (say ‘trait’ emotional intelligence) varies as a quantity.”
When they have finished replying, think how that kind of response will look in a court that requires empirical, evidence-based statements from its expert witnesses rather than personal opinions, hand-waving, and untested assumptions.
The reader of this blog might wonder why I am so disparaging of those who have for so long portrayed psychometric methodology as ‘best practice’. The reason is that for 20 or more years, psychometricians, and the professional organizations issuing guidelines for others to follow, have known about the substantive issues published by experts in measurement, but have studiously kept that information from users.
Just skim my article to see what has been published and said over the years by many measurement experts, yet carefully hidden from you by your local ‘expert’ psychometrician and test publisher.
You can also acquaint yourself with the precedent already set within another area of psychological ‘assessment’, which will form the basis of the new legal challenge that may await those needing to defend their use of psychometric test scores against aggrieved individuals or groups. The phrase “House of Cards” springs to mind.
But, on a more positive note, the world is thankfully moving on from 20th-century psychometrics. The “Next Generation” of assessments no longer conforms to any of these outdated guidelines or test-theory invocations and mantras.
Cognadev has been at the forefront of these innovations for two decades, joined now by a host of other companies creating and selling truly innovative assessments. Finally, innovation is taking hold big-time.
My article describes some of these innovations and the other organisations producing them, and lays out a new framework for constructing legally sound evidence bases for the reliability and validation of any Next Generation, or even earlier, assessment. And yes, it is called a framework for good reason: it is not a ‘do this by the numbers’ cookbook of assumption-laden statistical test theory.
When working within a non-quantitative science, constructing a robust evidence base requires careful thought, innovation, methodologies suited to the properties of the data at hand, and honest realism about the status of test scores.