The Cognadev assessment suite comprises somewhat unconventional tools that do not fit the mould of traditional psychometrics: the Cognitive Process Profile (CPP) and the Learning Orientation Index (LOI), both of which measure cognition; the Value Orientations (VO), which focuses on worldviews and levels of consciousness; and the Motivational Profile (MP), which identifies motivational drivers. We are therefore often asked to explain the research approach followed in developing these tools.
Our assessment methodologies were created primarily to address the shortcomings of existing measurement techniques such as IQ tests, personality questionnaires, and interviews. Here I broadly describe some of the considerations and reservations that shaped the research perspectives and assumptions, as well as the eclectic approach used to design and analyse the various assessment methodologies.
The scientific status of Psychology is neither explanatory, like Physics and Mathematics, nor speculative, like Philosophy. It is a descriptive science characterised by a somewhat weak rational basis combined with empirical observation. Psychological theories lack the technical precision of those typically found in the natural sciences, and research can at best aim to refute specific hypotheses. The models tested in such work are invariably restricted in scope and accuracy, and are aimed at being practically useful rather than at exemplifying universal laws. The constructs investigated are empirical conveniences rather than reflections of some fundamentally objective, external psychological reality.
To some, this may sound rather uninspiring. However, Psychology is ideally positioned to unlock the potentially rich interface between science and intuition, or between procedural rigour and conceptual creation. It has its finger on the pulse of what it means to be human. In the search for truth, the aspirational goal common to all sciences, psychologists are well placed to explore aspects of the cognising subject: the knower who is implicated in the structure of reality, and thus the kaleidoscopic complexity of the lenses used in creating science, philosophy and art.
Most psychologists are intrigued by the mysteries of sentience, consciousness, and human existence. This has contributed to the emergence of many different schools of thought, ranging from the “rats and stats” psychology of certain Behaviouristic traditions to Humanistic, Psychoanalytical, Existential, Systems, Transpersonal, Integral and Process-Oriented approaches. However, no matter how fascinating the insights arrived at through these various perspectives may be, from a broad scientific standpoint psychologists can merely strive to contribute to human development and socioeconomic pragmatics through careful insight, observation, and reasoned interpretation of facts.
Many leading researchers in Psychology, including Barrett and Prinsloo (2013)[1], Michell (1999, 2009)[2][3], Borsboom et al. (2009)[4], Grice et al. (2012)[5] and others, have argued that no psychological characteristic has been shown to vary as a quantity.
This complicates the measurement and analysis of the constructs involved. The statistical techniques generally applied have also invited much criticism. Nassim Taleb (2010)[6], an authority on the subject, regards the bell curve as fraudulent, correlational evidence as “so what”, and the “standard deviation” as leaving much to be desired, since it is merely a property of the bell curve (which constitutes only one of several possible ways of modelling data). He points out that equilibrium forces are at work when averaging findings, but that randomness can only be meaningfully ‘averaged out’ in environments devoid of extreme-magnitude outcomes. When dealing with high variability in a system (as is the case in Psychology), meaning is therefore lost through the calculation of averages and normal distributions. Furthermore, Ziliak and McCloskey (2009)[7] describe null-hypothesis significance testing, used as the yardstick and ultimate arbiter of the meaningfulness of research findings, as a cult. They point out that “significant” does not mean “important”, and vice versa.
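Taleb’s point about averaging under heavy tails is easy to demonstrate. The sketch below (a minimal illustration using standard-library Python, not any Cognadev analysis) compares sample means of thin-tailed normal data, which stabilise quickly, against sample means of heavy-tailed Cauchy data, which never settle because the Cauchy distribution has no finite mean or variance:

```python
import math
import random
import statistics

random.seed(0)

def sample_mean(draw, n):
    """Average of n draws from a random source."""
    return sum(draw() for _ in range(n)) / n

# Thin-tailed data: standard normal draws. The 200 sample means cluster tightly.
normal_means = [sample_mean(lambda: random.gauss(0, 1), 1000) for _ in range(200)]

# Heavy-tailed data: standard Cauchy draws. The sample mean of Cauchy data is
# itself Cauchy-distributed, so averaging never stabilises, however large n gets.
def cauchy():
    return math.tan(math.pi * (random.random() - 0.5))

cauchy_means = [sample_mean(cauchy, 1000) for _ in range(200)]

print("spread of normal sample means:", statistics.stdev(normal_means))  # small
print("spread of Cauchy sample means:", statistics.stdev(cauchy_means))  # large, erratic
```

With extreme-magnitude outcomes in play, the “spread of the averages” does not shrink with sample size, which is exactly why summarising such data with means and standard deviations discards the most important information.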
What are the alternatives and how do we arrive at new insights in Psychology?
Closed doors are often beneficial to a field of enquiry, as they require a return to the metaphorical drawing board. The challenge is to design an appropriate and creative solution anchored not in “either-or” but in “and-but” possibilities, which combine existing and innovative approaches. The outcomes of such fresh thinking will never be perfect, but the pioneering thinking, and the new way of looking at and solving old problems, invariably yields advances in knowledge and understanding, as well as practical benefits for individuals and society.
In developing the Cognadev assessments, the first step was to establish a sound theoretical foundation. Existing and potentially new theoretical models were critically evaluated against certain meta-theoretical criteria, including: parsimony or simplicity; an appropriate (intermediate) level of theorising; specificity of description; detailed operationalisation of the theoretical constructs; the structural adequacy of the model and its fit within the broader nomological network; and the practical utility of the model and the findings.
In the research itself, the emphasis was on considerable amounts of action research, appropriate statistical analysis, and the careful use and blending of intuitive insights. This reflects the recommendations of David Freedman (2009)[8], a mathematical statistician, who suggested that the strong emphasis on inferential statistics in traditional psychological research should be tempered. Statistics alone cannot substitute for a good research design, prudent empirical observation, and reasoning from a comprehensive knowledge of the issue under investigation. Freedman recommended a “low-tech” approach which “relies on intimate knowledge of the subject matter and a meticulous research design that eliminates rival explanations”, and pointed out that such an approach also offers a potential solution for dealing with the complexities that mathematics introduces and for managing the conditionality of statistical inference.
The process of creating the CPP began with an in-depth investigation of the literature. Action research, compassionate engagement with the hundreds and later thousands of individuals who comprised both research and workplace samples, and an intuitive inclination brought new insights and resulted in the formulation of a unique, self-contained model of holonically organised thinking processes. The construct validity of this model was investigated using a multitrait-multimethod (MTMM) design and the application of linear structural equation modelling to determine its convergent and discriminant validity.
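The core logic of an MTMM design can be sketched briefly: the same set of traits is measured by several methods, and convergent validity requires that same-trait/different-method correlations exceed the different-trait correlations. The matrix below is synthetic and purely illustrative (two hypothetical traits, two hypothetical methods; not Cognadev data):

```python
# Labels encode trait and method: e.g. "T1M2" = trait 1 measured by method 2.
labels = ["T1M1", "T2M1", "T1M2", "T2M2"]

# Hypothetical MTMM correlation matrix (synthetic illustrative values).
R = [
    [1.00, 0.30, 0.65, 0.20],
    [0.30, 1.00, 0.25, 0.60],
    [0.65, 0.25, 1.00, 0.35],
    [0.20, 0.60, 0.35, 1.00],
]

def trait(i):
    return labels[i][:2]

def method(i):
    return labels[i][2:]

# Convergent validities: same trait, different methods (monotrait-heteromethod).
convergent = [R[i][j] for i in range(len(labels)) for j in range(i)
              if trait(i) == trait(j) and method(i) != method(j)]

# Discriminant entries: correlations between different traits.
discriminant = [R[i][j] for i in range(len(labels)) for j in range(i)
                if trait(i) != trait(j)]

# Campbell and Fiske's informal criterion: every convergent validity should
# exceed the heterotrait correlations.
print(min(convergent) > max(discriminant))  # True for this matrix
```

In practice, as noted above, this informal comparison is supplemented by structural equation modelling, which fits trait and method factors to the full matrix rather than inspecting individual correlations.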
Next, an assessment technique in the form of a simulation exercise was designed to operationalise and externalise information processing activities at a micro level for automated tracking of the responses or thinking processes, in tenths of a second. These responses were thus not “answers” but the behavioural outputs of processes that were tracked according to thousands of pre-specified measurement points. An algorithmically-based expert system was then devised to analyse and interpret the recorded responses of a test candidate. The results informed the development of a comprehensive automated report generator. Once the CPP was deployed and data became available, the results were further subjected to in-depth statistical analysis by using the most advanced and appropriate statistical techniques.
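In principle, process tracking of this kind amounts to logging timestamped behavioural events and computing process indicators over the resulting event stream. The sketch below is a hypothetical illustration only: the event names, the tenth-of-a-second rounding, and the single indicator are placeholders, not Cognadev's actual measurement points or scoring algorithms:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessTracker:
    """Logs (timestamp, action) pairs at tenth-of-a-second resolution."""
    events: list = field(default_factory=list)

    def log(self, action, t_seconds):
        self.events.append((round(t_seconds, 1), action))

def exploration_ratio(events):
    """Hypothetical indicator: share of actions that revisit information."""
    revisits = sum(1 for _, action in events if action == "revisit_info")
    return revisits / len(events)

# Simulated candidate behaviour (placeholder event names).
tracker = ProcessTracker()
for t, action in [(0.0, "open_card"), (1.34, "revisit_info"),
                  (2.71, "revisit_info"), (4.2, "commit_answer")]:
    tracker.log(action, t)

print(tracker.events[1])                  # (1.3, 'revisit_info')
print(exploration_ratio(tracker.events))  # 0.5
```

The point of the sketch is the shift in unit of analysis: the behavioural outputs themselves, not “answers”, become the data, and an expert system then interprets indicators computed over thousands of such measurement points.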
In developing the LOI, the process was largely similar to that of the CPP. Although the LOI task differs completely from that of the CPP, it is also unfamiliar and yet similarly straightforward in its task demands. The LOI measures the way in which a person deals with complexity through the application of metacognitive criteria. As with the CPP, the LOI aims to diagnose a person’s cognitive preferences and capabilities, as well as learning potential, for purposes of selection, placement, development and career pathing. Whereas the CPP was developed for adults in the work environment, the LOI was aimed specifically at the assessment of Generation Y and Millennials.
The Value Orientations (VO) and Motivational Profile (MP) assessments have widely differing design and measurement methodologies, and each is very different from the cognitive assessment approaches that characterise the CPP and LOI. In addition, as with the CPP and LOI, neither assessment fits the traditional psychometric/statistical test-theory models found within typical non-cognitive self-report questionnaires.
Comprehensive technical manuals compiled by myself and Paul Barrett for all the Cognadev assessments are available on request. These are designed to provide the necessary coherent evidence-base for reliability and validity in support of the use of each test in the workplace and elsewhere. The ongoing refinement and improvement of all the tools is a feature of Cognadev’s R&D strategy.
To conclude: the innovative assessment methodologies provided by Cognadev are rooted in a critical evaluation of the field of Psychology and a large body of available research findings and theoretical models, combined with a practical, hands-on, compassionate and insightful approach to action research, intuition, and rigorous statistical analysis. The assessment tools have since been implemented by a large number of organisations globally, across industries, for selection, placement, development, team compilation, career guidance and organisational development. Many corporate clients have used the tools for more than a decade, such is their enduring pragmatic utility in providing unique and meaningful psychological information about a test-taker’s values, motives, and cognitive preferences and capabilities.
[1] Barrett, P. & Prinsloo, M. (2013). Rethinking Reliability and Validity of Psychological Measurements.
[2] Michell, J. (1999). Measurement in Psychology: Critical History of a Methodological Concept. Cambridge University Press. ISBN: 0‐521‐62120‐8.
[3] Michell, J. (2009). Invalidity in Validity. In Lissitz, R.W. (Ed.), The Concept of Validity: Revisions, New Directions, and Applications (Chapter 6, pp. 111-133). Charlotte: Information Age Publishing.
[4] Borsboom, D., Cramer, A.O.J., Kievit, R.A., Scholten, A.Z., & Franic, S. (2009). The end of construct validity. In Lissitz, R.W. (Ed.), The Concept of Validity: Revisions, New Directions, and Applications (Chapter 7, pp. 135-170). Charlotte: Information Age Publishing.
[5] Grice, J.W., Barrett, P.T., Schlimgen, L.A. & Abramson, C.I. (2012). Toward a Brighter Future for Psychology as an Observation Oriented Science. Behavioral Sciences, 2(1), 1-22. doi:10.3390/bs2010001.
[6] Taleb, N. (2010). The Black Swan: The Impact of the Highly Improbable. New York: Random House Trade Paperbacks.
[7] Ziliak, S. T, & McCloskey, D. N. (2009). The cult of statistical significance: How the standard error costs us jobs, justice, lives. University of Michigan Press.
[8] Freedman, D. (2009). Statistical Models and Causal Inference: A Dialogue with the Social Sciences. Cambridge University Press. ISBN: 9780521123907.