By Maretha Prinsloo on August 15, 2019
Based on decades of research findings regarding the predictive validity of intelligence tests in the work environment, most Human Resource (HR) practitioners regard intellectual functioning as the best psychometric indicator of work performance. Intellectual functioning largely refers to a person’s cognitive preferences and capabilities in terms of learning, problem solving, understanding, conceptualisation, decision making and responding. Organisations thus tend to opt for the use of cognitive assessments for the selection and placement of employees.
In this series of four articles, of which this is the first, various methodological approaches to the assessment of intellectual functioning are reviewed and summarised. The articles also touch on the specific theoretical assumptions which underlie the assessment methodologies of the various schools of thought within intelligence research.
The most common approaches capitalised on in this regard are the Differential, Information Processing, Developmental, Contextualist and Neurosciences paradigms. The most commonly applied test methodology, namely “ability testing”, which is associated with the Differential approach in Psychology, focuses on domain-specific knowledge and the application of logical-analytical skills. The Information Processing approach, by contrast, tracks dynamic thinking processes; the Developmental approach measures the acquisition of age-related intellectual and behavioural skills; the Contextualist position analyses context- and culture-specific cognitive competencies; and the Neurosciences approaches focus on brain activity.
Differential psychology assumes that “ability” reflects domain-specific skills which result from a combination of hereditary and educational factors. It is referred to as the “what” of intelligence. The Information Processing approach, on the other hand, concentrates on the “how” of thinking processes, and tends to externalise and track cognitive “preferences and capabilities” through means such as algorithmically driven expert systems. It is thus subject-dependent and largely transcends domain-specific content. The Contextualist paradigm, which underlies the assessment of cognitive competencies, emphasises the “where” and “when” of intelligence within specific cultural and environmental settings. The Neurosciences approach capitalises on advanced measurement techniques, including Transcranial Magnetic Stimulation (TMS), Electroencephalography (EEG), Average Evoked Potential (AEP), Functional Magnetic Resonance Imaging (fMRI), Diffusion Tensor Imaging MRI (DTI-MRI) and others, to study brain processes related to attention, epilepsy and the like. The Neurosciences approaches are not addressed in this series.
The various assessment methodologies used to assess the intellectual functioning of people include simulation exercises, assessment centers, situational judgement tests, gaming, IQ and ability tests, questionnaires and data scraping.
Given the sophisticated and detailed nature of thinking process simulations, this methodology will be addressed first, followed by a critical comparison of simulations with alternative cognitive assessment methodologies. The overall emphasis of this series of brief articles will thus be on the simulation of thinking processes as a cognitive assessment methodology.
1. Simulation Exercises
Simulation exercises involve dealing with real-life cognitive challenges. They may or may not include content-specific challenges. Most of these techniques can, however, be regarded as competency-based. Given the fact that they replicate the processing requirements of actual work, their metric properties, construct and predictive validity in particular, tend to overshadow those of most other approaches to cognitive assessment.
Typical simulations include in-basket exercises, role plays and group exercises, all of which are mostly referred to as assessment center methodologies, as well as knowledge and skill-based games and situational judgement tests which assess technical skills and job-related decision-making capabilities. In addition, there are also simulation games which require the application of detailed, operationalised thinking skills, but which are largely devoid of domain-specific content and thus do not focus on previously acquired knowledge and skills. The latter simulations are best suited to assess cognitive processing tendencies and learning potential across groups and cultural contexts.
The following types of simulation exercises will now briefly be discussed: thinking process simulations, assessment centers, situational judgement tests (SJT) and gaming.
1.1 Thinking process simulations
The assessment methodology which can be described as thinking process simulations reflects the Information Processing paradigm in Intelligence research. This assessment approach does not rely on job-specific content as do most in-basket and other assessment center techniques aimed at measuring managerial or job-related skills. Instead, thinking process simulations involve unfamiliar tasks which require the application of specific information processing competencies. In other words, unlike simulations which measure specific knowledge, this approach is largely subject-dependent and content-independent. The theoretical model involved, namely the Information Processing Model (IPM), forms the basis of the specific assessment techniques which represent process simulations, namely the Cognitive Process Profile (CPP) and the Learning Orientation Index (LOI) of Cognadev.
Although supervised, the CPP and LOI are largely self-administered assessments. The standardised delivery and automated scoring of these assessment tools are aimed at producing consistent and comparable results. Extensive and in-depth reports are generated automatically. Subjective interpretations of a candidate’s performance therefore do not apply.
The CPP and LOI represent a cognitive assessment methodology aimed at operationalising, externalising and tracking a test candidate’s thinking processes according to thousands of measurement points and feedback loops. These two assessments measure a person’s real cognitive responses to an unfamiliar assessment environment where the person has to make sense of, and meaningfully interpret, both structured and fuzzy information.
In the case of the CPP and the LOI, a test candidate can project their own preferred level of cognitive complexity onto the task (which is indicated as their preferred “unit of information”); apply a preferred stylistic approach (such as the Intuitive, Logical, Random, Metaphoric or Learning style, or any other of the 15 cognitive styles measured); and create meaning in any way, as there are strictly no right and wrong answers in these assessments, especially the CPP. The person can also work at their own pace, as time does not affect power or capability scores. This is an important consideration in cognitive test construction, as speed and power are separate constructs when it comes to intellectual functioning. The undifferentiated measurement of speed and power in intelligence research also holds implications for the adverse impact of an assessment.
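The separation of speed and power described above can be illustrated with a small, purely hypothetical scoring sketch. The item structure and values below are invented for illustration only and do not reflect the actual, proprietary CPP or LOI scoring algorithms:

```python
# Hypothetical illustration: scoring cognitive "power" independently of speed.
# Response quality and timing values are invented for the example.

def power_score(responses):
    """Quality of conceptualisation only -- time is deliberately ignored."""
    return sum(r["quality"] for r in responses) / len(responses)

def speed_score(responses):
    """Average time per task, reported as a separate construct."""
    return sum(r["seconds"] for r in responses) / len(responses)

responses = [
    {"quality": 0.8, "seconds": 95},
    {"quality": 0.9, "seconds": 140},   # slow but high-quality response
    {"quality": 0.7, "seconds": 60},
]

print(round(power_score(responses), 2))  # 0.8 -- unaffected by slow items
print(round(speed_score(responses), 1))  # 98.3 -- reported separately
```

Because the two scores are computed from different response properties, a reflective candidate who works slowly is not penalised on the power measure, which is the design principle at issue here.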
Furthermore, the content of the CPP and LOI is unfamiliar and not knowledge-based, and therefore to some extent independent of previous educational and work exposure. Because the CPP and LOI tasks are of an equally unfamiliar nature to all test takers, without presenting the information in a decontextualised or dis-embedded manner, the possibility of group bias in the assessments is reduced. Conventional ability testing, on the other hand, tends to capitalise on specific content or knowledge domains (such as spatial, verbal or non-verbal item content) while presenting the item content in a dis-embedded manner.
The CPP and LOI both measure cognitive “capability and preference” and predict the way in which a person is likely to perform in the work environment. These assessments do not claim to measure “ability” as is claimed by the providers of IQ testing. Unlike ability tests, the CPP and LOI also provide detailed guidelines that can be used for the further development of thinking processes.
The above thinking process simulations for cognitive assessment, and the available techniques namely the CPP and LOI, offer various benefits to test users, including:
- Unlike alternative test techniques such as IQ tests, assessment centers, SJTs, gaming, data scraping and questionnaires, the thinking process simulations, namely the CPP and LOI, are based on a sound theoretical foundation, the Information Processing Model (IPM). Besides the CPP and LOI, no other assessment methodology to date seems to be based on a self-contained theoretical model with construct validity.
- The processing simulations involved in these assessments are also not interpreted by the candidate or others, but involve real problem-solving performance. Unlike questionnaires and structured interviews, the CPP and LOI thus do not require self-reporting, which introduces measurement error and the justification of past personal performance.
- In addition, the task requirements of the CPP and LOI are not as transparent as those of questionnaires, and the results can therefore not be manipulated by the test taker.
- The problem of subjective rater interpretation, as often is the case with assessment center and structured interview methodologies, is also resolved by the standardised and automated nature of the CPP and LOI assessments and reports, where the results are objectively and algorithmically calculated in terms of thousands of measurement points.
- The CPP and LOI furthermore offer a fundamental solution to the limited, timed and cross-culturally loaded nature of typical IQ and ability tests.
- The CPP and LOI do not only focus on already developed knowledge and skills, as in the case of IQ tests and assessment centers, but also predict learning potential and the acquisition of information processing competence in the future, capitalising on the domain-free content of unfamiliar tasks.
- Thinking process simulations as capitalised on in the case of the CPP and LOI, thus incorporate a sound theoretical basis for the measurement of learning potential by tracking and analysing learning curves as well as processing tendencies in terms of 16 criteria or characteristics of cognitive functioning, to identify strengths, weaknesses and metacognitive awareness, all of which can be addressed developmentally.
- Unlike some structured interviews, the CPP and LOI do not reduce the complex concept of learning potential and cognitive modifiability to assumptions regarding the age-based prediction of potential.
- The cost involved in time-intensive assessment center and structured interview techniques is, in the case of the CPP and LOI, reduced by automated online assessment and automated reporting.
- In the case of the CPP and LOI, the focus is on the practical utility of the results for developmental, selection and placement requirements.
- One of the key advantages of using the CPP and LOI as opposed to alternative methodologies of cognitive assessment, is the lack of adverse impact and cross-cultural bias. The CPP and LOI rely on several design features to ensure valid assessment across groups, including:
- allowing test candidates to apply any of 15 different stylistic approaches to accommodate for personal and cultural preferences in problem solving approach;
- not capitalising on right-or-wrong answers, instead focusing on meaningful conceptualisation, which is scored in terms of certain processing criteria;
- not applying time limitations, in that cognitive speed and power are measured separately;
- the activation of auditory, visual and kinesthetic modes of processing to accommodate for individual and group differences in processing approach;
- the avoidance of decontextualised and disembedded item content to cater for test candidates from contextual language backgrounds;
- the use of test-train-test techniques to gradually introduce unfamiliar task requirements;
- providing interactive feedback on performance, to track learning curves;
- utilizing unfamiliar task content to create equal opportunities for candidates from different educational and socio-economic backgrounds;
- requiring only low level (grade 5 mother tongue) language proficiency for those who are not linguistically skilled;
- not measuring grammar, spelling or sentence construction skills which are largely educationally developed.
- The CPP and LOI have been researched in-depth and the results of validity, reliability and adverse impact studies are summarized in the research manuals.
- The CPP results of adults in the work context are commonly used for purposes of career guidance, selection, placement, development and coaching, succession, the identification and development of leadership potential, as well as organisational development. The LOI, aimed at the 16 to 30 age range, is used for career guidance, bursary allocation, fast tracking, development and selection.
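Adverse impact, mentioned in several of the points above, is commonly screened for with the “four-fifths rule”: each group’s selection rate is compared with that of the group with the highest rate, and a ratio below 0.8 is taken as a flag for possible adverse impact. A minimal sketch, with invented group labels and counts:

```python
# Four-fifths (80%) rule: a common screen for adverse impact in selection.
# Group labels and applicant counts below are invented for illustration.

def selection_rate(selected, applicants):
    """Proportion of applicants from a group who were selected."""
    return selected / applicants

def impact_ratio(focal_rate, reference_rate):
    """Ratio of a focal group's selection rate to the highest group rate."""
    return focal_rate / reference_rate

group_a = selection_rate(30, 100)   # 0.30 -- highest selection rate
group_b = selection_rate(21, 100)   # 0.21

ratio = impact_ratio(group_b, group_a)
print(round(ratio, 2))              # 0.7 -- below 0.8, so flags adverse impact
```

The four-fifths rule is only a screening heuristic; adverse impact studies of the kind summarised in test research manuals typically add statistical significance testing on top of it.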
1.2 Assessment center methodologies
Assessment center methodologies represent the Contextualist approach to psychological research in that the focus is on the measurement of the competencies required for effective performance within specific knowledge and skill domains.
The use of a variety of customised assessment centers is becoming common practice within organisations aiming to determine technical skills, behavioural tendencies, managerial skill and leadership potential.
Assessment center evaluations largely focus on behavioural and/or conceptual performance within domain-specific areas. A variety of techniques are involved, including in-basket exercises, leaderless groups, interactive group exercises, skill-specific games, case studies and questionnaires. Candidates who are evaluated by means of these exercises are often, but not necessarily, observed live and in real time by raters. Responses can also be evaluated by manual scoring of open-ended questions or by automated means. These techniques can briefly be described as follows:
- Group exercises mostly involve a small group of young professionals or managers who are required to perform a pre-defined task which involves collaboration, decision making and leadership, while being observed by raters in terms of certain performance criteria. These exercises may be fairly time consuming.
- Virtual stylised simulations, in the form of video games that are built around specific business skills, are often used for the screening of young professional candidates. These assessments are popular in large organisations where the aim is to create talent pools.
- Both online and directly observed in-basket exercises, capitalising on real life managerial challenges, are often performed to determine managerial and/or other skills. The scores of candidates with previous exposure to managerial requirements may therefore be elevated.
- Role plays are often used to determine behavioural skills associated with sales or leadership performance. They may also form part of an interview.
- Questionnaires in the form of assignments, often completed in the candidate’s own time, may be used to determine the managerial insight, decision-making skill and procedural approach of candidates. Here, the validity of the results may be derailed in cases where test candidates obtain guidance and advice from others.
The benefits of assessment centre methodologies include adaptability to a variety of applications, including real and online games, interviews and questionnaires aimed at measuring different competencies and skill sets. In addition, assessment centres reflect real-life work requirements and capabilities, which increases the predictive and face validity involved. Given the subjective nature of rater impressions, however, inter-rater reliability may pose challenges. To alleviate this problem the assessment criteria need to be clearly operationalised and specified in detail. Another challenge relates to the fact that the performance measured by assessment centres is largely affected by previous experience, and is therefore not suitable for the prediction of learning potential. The use of assessment centre methodology can also be expensive and time consuming. It is, however, widely used and relied on for purposes of selection, placement, succession and the development of people in the work environment.
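The inter-rater reliability challenge mentioned above is often quantified with Cohen’s kappa, which corrects the raw agreement between two raters for the agreement expected by chance. A minimal sketch, with invented competency ratings for eight candidates:

```python
# Cohen's kappa: chance-corrected agreement between two assessment centre
# raters. The "low"/"high" competency ratings below are invented.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Agreement expected if both raters assigned categories independently.
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

r1 = ["high", "high", "low", "low", "high", "low", "high", "low"]
r2 = ["high", "high", "low", "high", "high", "low", "low", "low"]

print(round(cohens_kappa(r1, r2), 2))  # 0.5 -- only moderate agreement
```

Here the raters agree on six of eight candidates (75%), yet the kappa of 0.5 is far lower, because with two roughly balanced categories half of that agreement would be expected by chance alone; this is why operationalising the assessment criteria in detail matters.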
1.3 Situational Judgement Tests (SJTs) and gaming for screening purposes
Situational Judgement Tests or Inventories, abbreviated as SJTs or SJIs, have been around since the mid-1900s. These assessments capitalise on realistic workplace scenarios for recruitment, screening and selection purposes.
The construction of SJTs relies on job analyses and the opinions of job experts, as most of these assessments are tailor-made in terms of particular work requirements. The test content of SJTs can be presented through a variety of modalities, including video, audio and printed materials. The test items normally describe work challenges where certain responses need to be selected or prioritised. The goal is to evaluate the appropriateness of a person’s responses or judgements in certain work-related situations. Behavioural tendencies are also inferred as a basis for predicting a person’s role suitability. The test content often directly reflects role-related operational tasks and decisions. SJTs are usually not timed.
There are a number of benefits to using SJT assessments. First and foremost, they reflect specific role requirements, as the scenarios specified in the assessment closely overlap with work-related tasks. This normally contributes to the predictive validity of assessment results. SJTs can be used to assess a variety of competency constructs, using different techniques, and are relatively easy to develop, customise, administer and score. Online SJTs are most appropriate for high-volume screening purposes.
The use of SJTs is, however, also criticised for certain shortcomings. For one, the value-add of an SJT entirely depends on the quality of the items in the specific test. These assessments are also most appropriate for selecting candidates for operational roles, as opposed to strategic or creative roles. In addition, the scoring of SJTs remains problematic given the absence of objective criteria for determining the best possible answers. Job experts may, for example, differ as to the most appropriate responses to a situation. In these instances, a consensual scoring approach is often used. The latter may, however, not necessarily appreciate the potential value-add of unusual, creative, intuitive or complex logical approaches.
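The consensual scoring approach referred to above can be sketched as follows: each response option is keyed by the proportion of job experts who endorse it, and a candidate’s item score is simply the endorsement proportion of the option chosen. The scenario options and expert votes below are invented for illustration:

```python
# Consensual SJT scoring: an option's key value is the share of job experts
# who picked it as the best response. Expert votes below are invented.

def consensus_key(expert_choices):
    """Map each response option to the proportion of experts endorsing it."""
    total = len(expert_choices)
    counts = {}
    for choice in expert_choices:
        counts[choice] = counts.get(choice, 0) + 1
    return {option: count / total for option, count in counts.items()}

def score_response(key, chosen_option):
    """A candidate choosing a widely endorsed option earns a higher score."""
    return key.get(chosen_option, 0.0)

# Ten experts judge the best response to one workplace scenario.
expert_choices = ["A", "A", "A", "A", "A", "A", "B", "B", "C", "D"]
key = consensus_key(expert_choices)

print(score_response(key, "A"))  # 0.6 -- majority-endorsed option
print(score_response(key, "D"))  # 0.1 -- unusual, possibly creative option
```

The sketch makes the criticism concrete: a candidate who chooses the rare option D is scored low by construction, even if that response reflects a creative or complex logical approach that the expert panel did not anticipate.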
Adverse impact effects often characterise SJTs, given their experience-based, visual, cultural and socio-economic bias. In terms of the metric properties of SJTs, it seems that they may lack what is referred to as content validity, in that the work samples used as items mostly fail to represent the entire required knowledge and skills base involved. In addition, their content-specificity makes it difficult to investigate their metric properties, the test-retest reliability in particular.
SJTs, games and simulation exercises overlap and can all be regarded as assessment centre methodologies.
Gamification, which often involves scenario-based items, has become a popular screening technique in the recruitment of job candidates. Not only competencies and decision-making skills, but also conventional psychometric constructs related to personality and intellectual functioning, are inferred from these techniques. The often quick and easy-to-use games are usually delivered on mobile devices and aimed at younger generations. Not only do these techniques access a wider audience for both candidates and employers, but the data can easily be filtered and matched to the competency requirements of work to improve placement decisions. Candidates who have technological skills and experience may well achieve better scores on gamified assessments, without any assurance that the skills measured would necessarily transfer to work-related performance.