By Maretha Prinsloo on August 15, 2019
This article on conventional psychometrics is the second in a four-part series on cognitive assessment techniques aimed at the selection, placement and development of people in educational and work environments. The first part discussed simulation exercises; the third explains facilitated and interpreted methodologies; and the fourth offers a comparative summary of all these methodologies.
Here the focus is on ability testing, personality questionnaires and volume-based screening tests.
2. Conventional Psychometrics
2.1 Ability or IQ tests
Ability or Intelligence Quotient (IQ) testing is a product of Differential Psychology, a school of thought that emerged early in the 20th century with the aim of revealing the structure of the intellect.
IQ tests measure intellectual “ability” by using highly structured item content which primarily requires the linear-causal application of logical-analytical thinking processes to solve problems. Convergent reasoning is normally involved, which means that the aim is to come up with one correct answer to a problem. Ability testing usually capitalises on a particular content domain (e.g. verbal-linguistic, logical-mathematical, visual-spatial or other domains). IQ tests are mostly timed and both speed and power are assumed to indicate intellectual ability.
The results of IQ tests are meant to reflect a candidate’s intellectual capacity, which is assumed to be largely genetically determined and environmentally developed. Most IQ tests measure intellectual functioning according to the same recipes. No wonder, then, that their results generally correlate, which has given rise to the concept of “g” or general ability. Two second-order factors also tend to emerge from correlations between various sets of intelligence test results; Cattell referred to these as crystallised and fluid intelligence. Whereas crystallised intelligence seems to reflect knowledge and skills acquired through learning within particular socio-economic and educational environments, fluid intelligence refers to a person’s capacity to deal with new and unfamiliar situations, also referred to as learning potential and cognitive adaptability. Crystallised intelligence is mostly assessed through verbal item content such as vocabulary, general knowledge and number-related concepts, whereas fluid intelligence, which is regarded as largely genetically determined, is often assessed by means of non-verbal, visual-spatial pattern recognition or other abstract item content.
A proliferation of IQ tests has emerged since the early 1900s, largely within the military context during the world wars as well as in the educational context. Examples of IQ tests that were developed are:
- The Stanford Binet Intelligence Scales which involve the use of both verbal and non-verbal contents to measure fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing and working memory.
- Thurstone’s Primary Mental Abilities test which was designed to measure seven intellectual skills, namely: Verbal comprehension, Numerical ability, Memory, Spatial relations, Word fluency, Inductive reasoning and Perceptual speed.
- The Watson-Glaser Critical Thinking Appraisal which measures critical reasoning, and focuses on Logical arguments, Typical assumptions, Analyses, Inferences, Deductions, and Interpretation of information. It is administered under time constraints.
- Assessments similar to the Watson-Glaser include the GMAT (Graduate Management Admission Test), the SHL Critical Reasoning Test Battery (CRTB), and the Cornell Critical Thinking assessment, to mention but a few of the approximately 5000 conventional aptitude and ability tests available on the market.
- The Cattell Culture Fair III test is aimed at removing discriminatory item content linked to socio-cultural and environmental factors so as not to disadvantage certain groups.
- The Differential Ability Scales (DAS) and the Differential Aptitude Test (DAT) also consist of a variety of verbal and non-verbal ability subtests which measure intelligence.
- The facilitated Wechsler Adult Intelligence Scale (WAIS) and the WISC (for children) likewise measure a number of verbal and non-verbal skills. Here, the interaction between the test candidate and the facilitator also provides a useful source of information on a person’s intellectual functioning. The verbal subtests include: Information, Vocabulary, Similarities, Comprehension, Arithmetic, Digit span, and Letter-number sequencing. The non-verbal subtests are: Picture arrangement, Block design, Picture completion, Digit-symbol coding, Symbol search, and Matrix reasoning. The WAIS is also timed, and it is quite time-consuming to administer.
- The Raven’s (Advanced) Progressive Matrices and Matrigma tests, as well as a host of other IQ tests, are matrix or spatial reasoning style assessments which capitalise on pattern recognition (a domain-specific skill) to measure General Mental Ability (GMA) or generic conceptual ability, as well as Cattell’s concept of fluid ability (or intellectual potential). Critical research evaluations suggest that these measurement claims, namely that spatial reasoning depicts GMA and fluid ability, are unsound. Figural analysis items can, however, safely be regarded as measuring spatial abilities.
The benefits of IQ tests include their reasonable predictive validity in the work environment, which outperforms that of personality test results; they are quick and easy to administer; and they indicate educational sophistication as well as the quality of acquired educational knowledge and skills. The IQ test industry is also a multi-billion dollar industry and thus most viable from a commercial perspective. Both test providers and corporate users tend to regard IQ testing as valuable.
Although IQ test scores are generally found to correlate with work performance at the 0.3 to 0.4 level, which is regarded as significant, IQ test scores of two or more standard deviations (SDs) above average do not indicate higher levels of performance than scores of one SD above average. It should also be pointed out that IQ test results seem to best predict intellectual performance in structured, factual, detailed, linear-causal and knowledge-driven environments, as is the case with school grades and operational or technical-specialist work contexts. IQ test results are less effective in predicting performance in environments that require intuitive judgement as well as creative and strategic thinking, or in contexts characterised by vague, fuzzy, complex, interactive and dynamic information. It can thus be concluded that IQ test results can be used to predict performance risk, but not necessarily high performance in strategic environments.
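The standard-deviation claim above can be made concrete with a little arithmetic. On the conventional IQ scale (mean 100, SD 15 — an assumption here, since norming varies between tests), one and two SDs above average correspond to IQ 115 and 130. A minimal sketch using only Python’s standard library shows how much rarer the higher score is, even though, per the research cited above, it does not predict correspondingly higher performance:

```python
from statistics import NormalDist

# Conventional IQ norming: mean 100, standard deviation 15
# (assumed for illustration; individual tests may norm differently).
iq = NormalDist(mu=100, sigma=15)

for sds_above in (1, 2):
    score = 100 + sds_above * 15
    percentile = iq.cdf(score) * 100  # share of population at or below this score
    print(f"IQ {score} ({sds_above} SD above the mean): "
          f"top {100 - percentile:.1f}% of the population")
```

Running this shows that an IQ of 115 places a person in roughly the top 16% and an IQ of 130 in roughly the top 2%, which underlines the article’s point: the rarer score does not buy proportionally better predicted work performance.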
Unfortunately, IQ testing methodologies lack a sound theoretical basis, which fundamentally reduces their construct validity. The methodologies also rest on weak theoretical assumptions, such as the reductionist notion of intelligence as hereditary or genetically determined, and thus static, which overlooks the potential impact of cognitive modifiability. Another example is the assumption that spatial reasoning test items can be used to measure general conceptual capability.
Other weaknesses of the IQ testing paradigm include the defining impact of inadequate statistical techniques such as averaging, correlations and factor analyses, which have, for example, resulted in the concept of “g” or general ability. The dis-embedded nature of IQ test items also involves cross-cultural loading, which renders them culturally biased. In addition, the simultaneous measurement of intellectual speed and power, which are actually separate constructs as far as intellectual functioning is concerned, further reduces the validity of ability test results. Also, the inadequate item content of IQ tests fails to represent the intellectual skills that they purport to predict, thereby reducing the practical utility of IQ tests, especially in strategic work environments. The representation of something as complex as human cognition by single scores is thus an unfortunate and reductionist practice.
2.2 Personality questionnaires
Questionnaire-based techniques which rely on the test candidate’s self-insight and self-report are notoriously invalid for the assessment of cognitive capability, although these techniques may indicate a person’s cognitive preferences.
- The trait dimensions measured by Cattell’s Sixteen Personality Factor (16PF) questionnaire, and its derivatives such as SHL’s Occupational Personality Questionnaire (OPQ) and Psytech’s Fifteen Factor Questionnaire (15FQ), report on factors indicating cognitive preferences in addition to personality factors. For example, the OPQ indicates scores on tendencies described as Data rational, Conceptual, Detail conscious, Innovative, Critical, Decisive, Forward planning and Artistic, all of which relate to cognitive preferences but not necessarily to cognitive capability. The 15FQ measures preferences on dimensions such as Concrete versus Abstract information (Factor M) and Low intelligence versus High intelligence (Factor β), the latter indicating the candidate’s confidence in their own intellectual capability. The dimension of Openness to ideas, derived from Factor Q1, or the Conventional versus Radical continuum, indicates innovative and unconventional tendencies. These preference-related factors seem to correlate, to some extent, with the information processing scores measured by the CPP’s cognitive styles, which likewise reflect cognitive preferences more than capability.
- The measurement of personality typologies as opposed to traits, for example by the Myers Briggs Type Indicator (MBTI) as provided by the Consulting Psychologists Press, also indicates cognitive preferences for factual (S) versus ideas-oriented (N) information; for decision making based on logical-objective (T) versus more complex personal and emotional (F) considerations; and for structured (J) versus open-ended, process-oriented (P) approaches to goal achievement. The relationship between MBTI dimensions and cognitive functioning as measured by the CPP is relatively complex. It has, for example, been found in pilot studies that the relationship between cognition (as measured by the CPP) and personality type (as measured by the MBTI) depends on the person’s level of cognitive complexity, or suitability to a particular SST (Stratified Systems Theory) environment. In other words, it seems that those with an Intuitive cognitive style as indicated by the CPP, who are best suited to functioning in operational environments, deal with sensory data in an intuitive manner, whereas those who are best suited to strategic work tend to intuitively focus on abstract information.
Whereas personality questionnaires may reflect cognitive preferences, the use of personality questionnaires to assess cognitive capability is far from ideal. Most test subjects, especially those who lack self-awareness and introspective capacity, are not in a position to provide objective information on their own intellectual functioning. Questionnaire-based items are also relatively transparent, which allows for the manipulation of the test results.
2.3 Volume-based screening tests
Light versions of conventional psychometric tests and games are often provided online or delivered on mobile devices for purposes of mass recruitment and organisational audits. In most cases volume assessments mimic the format and premises of traditional ability and personality tests, and therefore show the same shortcomings as conventional psychometrics and most superficial gaming techniques do.
The constructs measured by volume-based assessments may include those of intelligence, personality, emotional intelligence, interest and specific competencies such as sales skills. The results of these assessments are often integrated by the system to inform certain work-related competencies.
Literally thousands of such tests and systems are available online, the quality of which seems to vary significantly. Cognadev, for example, offers the Cliquidity system, consisting of a number of quick and adaptive assessments of personality, motivation, cognitive complexity, vocational interest, performance risk and entrepreneurial orientation. The Cliquidity Adaptive Reasoning Assessment (CARA) capitalises on abstract item content to measure the level of complexity at which an individual shows the potential and preference to work. The brief report indicates the person’s logical reasoning, learning agility and speed capabilities. Integrated competency profiles compiled from various types of assessment results can also be provided by the Cliquidity system.
Such quick and light tests aimed at the self-assessment of candidates, with automated scoring and reporting functionalities that use layman’s terminology, are particularly useful in saving HR practitioners’ time, in that they enable the screening of large volumes of people on social media, job applicants and employees. Quality volume assessment systems also offer the benefit of creating and maintaining virtual talent pools in which competency searches can inform person-job matching. Such systems are not only based on psychological characteristics, but also incorporate the educational and employment profiles of individuals. Candidates with online assessment and biographical profiles also gain the benefit of (initially) anonymous exposure to potential employers as well as other opportunities.