
Psychological Assessment: Beyond Self-Report Questionnaires

November 1, 2016 | By Paul Barrett




The Usual

Imagine the conventional psychometrician within a test publisher or academic department, designing a test to assess two attributes thought relevant to job performance in the workplace: a preference for Clarity and a preference for Ambiguity. The likely design steps are:

Step 1: Envisage the two attributes as a single bipolar construct.

Step 2: Create maybe 20 or so items to assess the meaning of the construct; hide this meaning as best you can from the test-taker (or at least, do not let them know what you are assessing).

Step 3: Choose a response format, anywhere from “yes-no” through to a multiple-choice Likert scale.

Step 4: Use classical or item-response test-theory methods to produce the scale of items, together with the associated psychometric reliability and validity evidence (a minimal scoring sketch follows the example items below).

Typical items might look like:

  • I prefer the kind of work where my job is clearly defined from the outset.
  • I don’t like having to cope with conflicting role-requirements in my job.
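
As a concrete illustration of Step 4, here is a minimal sketch of the classical-test-theory side of that workflow: summing item responses into a scale score and estimating reliability with Cronbach’s alpha. The data are simulated and purely hypothetical, and nothing here is specific to any published scale.

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Classical-test-theory internal consistency for a persons-by-items matrix."""
    n_items = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)        # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)    # variance of the total score
    return (n_items / (n_items - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical data: 200 test-takers answering 20 Likert items (coded 1-5),
# driven by a single latent preference plus noise.
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
noise = rng.normal(size=(200, 20))
likert = np.clip(np.round(3 + trait + noise), 1, 5)

scale_scores = likert.sum(axis=1)                    # each person's raw scale score
print("alpha =", round(cronbach_alpha(likert), 2))
```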


The Next Innovation: Single-item graphical profilers

Now put aside all the statistical test-theory psychometrics and instead consider the problem afresh. The result is the assessment created for a New Zealand organization (Mariner7 Ltd) back in 2001.

Step 1: Define both Clarity and Ambiguity, each within a single coherent semantic statement. These statements are what the test-taker will see on the test.

Step 2: Do not assume they are polar opposites; assess each simultaneously.

Step 3: Add a second dimension: a judgement of how the person prefers to balance their working time between experiencing each.

Step 4: Acquire responses from a single graphical ‘item’, and display the meaning of the response (what the organization will see) as text for the test-taker to read and agree with before moving on to the next ‘item’. The text they see changes in line with how they move the sliders:


Figure 1: The Mariner7 assessment graphical profiler item-format
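
The figure itself is not reproduced here, but the logic of such a graphical ‘item’ is easy to sketch: the feedback text shown to the test-taker is generated directly from the slider positions and must be confirmed before moving on. The band labels, cut-points, and wording below are illustrative assumptions, not Mariner7’s actual text.

```python
def band(value: int) -> str:
    """Map a 0-100 slider position onto a rough verbal band (illustrative cut-points)."""
    if value < 34:
        return "a low"
    if value < 67:
        return "a moderate"
    return "a strong"

def describe(clarity: int, ambiguity: int, balance: int) -> str:
    """Render the feedback text the test-taker confirms before the next 'item'.

    clarity, ambiguity: independent 0-100 preference sliders (not assumed to be opposites).
    balance: 0-100 slider for the share of working time preferred under ambiguous conditions.
    """
    return (
        f"You report {band(clarity)} preference for clearly defined work, "
        f"{band(ambiguity)} preference for ambiguous, loosely defined work, "
        f"and would choose to spend about {balance}% of your working time "
        f"with ambiguity and {100 - balance}% with clarity."
    )

print(describe(clarity=80, ambiguity=55, balance=30))
```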


The New Frontier: Beyond Self-Report

Cognadev has been working at this frontier for many years with two of its pioneering assessments: the Cognitive Process Profile (CPP) and the Learning Orientation Index (LOI).

The CPP is an online, computer-administered simulation exercise that presents the test-taker with unfamiliar information. It can in some respects be compared to assessment centres, but the CPP does not capitalise on domain-specific information, previous experience, subjective observations, and so on. As the person completes the CPP by moving cards around, trying to understand and conceptualise the meaning of symbolic messages, every movement is tracked across thousands of measurement points.

The aim is to measure thinking processes – complexity preferences and capabilities in particular. The person’s responses are then analysed algorithmically by an expert system and an automated report is generated. Assessment may take anything from one to three hours, as there are no time limits.
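
The CPP’s expert-system scoring is proprietary, so the following is only a toy sketch of the general idea behind performance-based measurement: record every card movement as a time-stamped behavioural event, then let scoring rules operate on that behavioural record rather than on self-reported answers. The event fields and the ‘exploration breadth’ rule are assumptions invented for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CardMove:
    """One tracked behaviour: a card acted upon at a given time (seconds from task start)."""
    timestamp: float
    card_id: str
    action: str          # e.g. "select", "group", "reorder"

def exploration_breadth(events: List[CardMove]) -> float:
    """Toy indicator: proportion of distinct cards touched before the first grouping action."""
    touched = set()
    for event in events:
        if event.action == "group":
            break
        touched.add(event.card_id)
    distinct_cards = {event.card_id for event in events}
    return len(touched) / max(len(distinct_cards), 1)

log = [
    CardMove(1.2, "card_A", "select"),
    CardMove(3.7, "card_C", "select"),
    CardMove(5.1, "card_A", "group"),
    CardMove(9.4, "card_D", "select"),
]
print("exploration breadth:", exploration_breadth(log))
```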

The cognitive constructs measured by the CPP include:

  • cognitive styles (preferences and response tendencies including Logical, Intuitive, Holistic, Learning, Structured and other styles)
  • information processing competencies (including exploration, analysis, structuring, integration, transformation, memory and judgement capability)
  • learning potential and cognitive modifiability
  • complexity preferences and capabilities (units of information)
  • current and potential levels of work (linked to Stratified Systems Theory)
  • speed and power aspects
  • cognitive strengths and development areas

Judgement capability in particular, aimed at clarifying ambiguous and discrepant information, is measured by presenting the test-taker with systematically varied unfamiliar, unstructured and/or fuzzy information. The following processing activities are tracked in tenths of a second:

  • (a) becoming aware of vagueness;
  • (b) optimally exploring available clues;
  • (c) capitalising on intuition to clarify missing information;
  • (d) evaluating alternative options;
  • (e) contextualising one’s own conclusions in terms of the purpose of the task; and
  • (f) making a decision.

These processes are guided by the application of metacognitive criteria including relevance, clarity, precision, coherence, and purposefulness, all of which are tracked in detail by the CPP.
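
To make “tracked in tenths of seconds” concrete, here is a minimal sketch: given hypothetical onset times for the activities (a) to (f) listed above, the time spent in each phase is derived and reported to a tenth of a second. The phase labels and latencies are invented; the CPP’s actual measurement points are far more numerous and fine-grained.

```python
# Hypothetical phase onsets (seconds from item start) for activities (a)-(f) above.
phase_onsets = {
    "a_noticing_vagueness": 0.0,
    "b_exploring_clues": 4.3,
    "c_intuitive_clarification": 18.9,
    "d_evaluating_options": 31.2,
    "e_contextualising_conclusion": 55.0,
    "f_decision": 71.8,
}

# Duration of each phase, rounded to tenths of a second.
labels = list(phase_onsets)
for current, nxt in zip(labels, labels[1:]):
    duration = phase_onsets[nxt] - phase_onsets[current]
    print(f"{current}: {round(duration, 1)} s")
```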

The CPP report comments on the person’s overall cognitive functioning and provides developmental guidelines. Most multinationals and executive search companies use it to assess general managers, executives and senior professionals, but it can also be used with school and university leavers and to identify the cognitive potential of people in operational environments.

The LOI is based on the same principles as the CPP but has been completely redesigned for the assessment of Generation Y and Millennials.

The theoretical models, dynamic assessment methodology, and computer-based expert-system scoring algorithms used within the CPP and LOI are unique; the key feature of both is that they form assessments of psychological attributes and functioning from actual behaviours, not self-reports of behaviours.

This move away from long and usually tedious self-report questionnaires has now begun among the major ‘corporate’ test publishers. Many are exploring the gamification of assessment, while others, such as Hogan-X, are following the lead of Michal Kosinski and colleagues[1] in analysing an individual’s activity and digital footprint on social media sites in order to form judgements about their personality.
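
In outline, that approach[1] regresses questionnaire-derived trait scores onto features extracted from a person’s social-media language. The heavily simplified sketch below (bag-of-words features plus ridge regression on invented data) is offered only to make the contrast with behaviour-based assessment concrete; it is not the authors’ actual model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical training data: users' pooled status updates and their
# questionnaire-derived Openness scores (the criterion the model learns to mimic).
posts = [
    "spent the weekend reading philosophy and painting",
    "another quiet night in, same routine as always",
    "tried a new recipe, wrote a short story, booked a trip",
    "watched tv and went to bed early",
]
openness_scores = [4.5, 2.1, 4.2, 2.4]

model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
model.fit(posts, openness_scores)

# Predict a trait score for an unseen user from language alone.
print(model.predict(["sketching ideas for a novel while learning italian"]))
```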

The problem, though, is that unlike the CPP and LOI, with their careful, in-depth design processes and their grounding in theories, experimental results, and concepts spanning cognitive through to integral psychology, these new ventures are light on psychology and heavy on technology and data analytics. That is not a recipe for accuracy or science in the design of assessments of human cognition and personality[2].

Maretha Prinsloo, the designer/author of the CPP and LOI, showed an unwavering dedication to ‘getting it right’ rather than ‘just getting something fashionable done’. In a completely different assessment domain, Sid Irvine and his team at Plymouth University showed the same dedication when producing their assessment tests now used by many armed forces worldwide. In his recent book[3] describing the genesis of the British Army Recruit Battery (BARB) and the extension of the work into other countries’ military recruitment schemes, Sid also shows what truly serious innovation in test design looks like.

As with the CPP and LOI, what sets BARB apart is the huge input from psychological theory and experiment, coupled with clear thinking about cognition. This is not test design produced in response to a ‘must have a shiny new thing’ marketing strategy; rather, these tests are designed from the outset on strong theory and experimentation and, in the case of BARB, on the creation of a completely new kind of item grammar and construction process enabling on-the-fly item generation.
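
“On-the-fly item generation” means that items are produced from a generative rule set at the moment of administration rather than drawn from a fixed, pre-written bank. The toy grammar below (a two-term transitive-reasoning frame) is an assumption for illustration only, not Irvine’s actual BARB item grammar, but it shows the principle: every run yields fresh, parallel items with known answers.

```python
import random

NAMES = ["Anna", "Ben", "Carol", "David"]
RELATIONS = [("taller", "shorter"), ("faster", "slower"), ("older", "younger")]

def generate_item(rng: random.Random) -> dict:
    """Generate one reasoning item from a toy grammar: '<X> is <rel> than <Y>. Who is <query>?'"""
    x, y = rng.sample(NAMES, 2)
    rel, inverse = rng.choice(RELATIONS)
    ask_inverse = rng.random() < 0.5
    question = f"{x} is {rel} than {y}. Who is {inverse if ask_inverse else rel}?"
    answer = y if ask_inverse else x
    return {"question": question, "answer": answer}

rng = random.Random(42)
for _ in range(3):
    print(generate_item(rng))
```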

Right now, there are only two Next-Generation performance-based models of assessment in town:

  • Maretha Prinsloo’s CPP and LOI
  • Sid Irvine & colleagues’ item grammar for on-the-fly item generation

Although quite different in purpose, stimuli, and scoring procedures, both are proven innovations in assessment: pioneering and successful in their specific domains.


[1] Park, G., Schwartz, H.A., Eichstaedt, J.C., Kern, M.L., Kosinski, M., Stillwell, D.J., Ungar, L.H., & Seligman, M.E.P. (2015). Automatic personality assessment through social media language. Journal of Personality and Social Psychology, 108(6), 934-952.

[2] Mazzocchi, F. (2015). Could Big Data be the end of theory in science? A few remarks on the epistemology of data-driven science. EMBO Reports, 16(10), 1250-1255.

[3] Irvine, S.H. (2014). Computerised test generation for cross-national military recruitment: A handbook. Amsterdam: IOS Press.