
Intellectual Capital Management: Practical guidelines for an Intellectual Capital Management solution

By Paul Barrett on September 17, 2019



In this third blog of the 4-part series, we outline our practical guidelines for choosing an assessment solution.

In order to manage talent, one needs to understand both the work-related requirements and the psychological factors involved. The assessment of these aspects offers a solid foundation for the entire value chain of people or talent management. We recommend an assessment solution guided by the following principles:

  • Cognitive functioning forms an integral and important part of work performance. It refers to a dynamic, adaptive and multi-dimensional factor incorporating intellectual, motivational and consciousness factors, all of which need to be contextualised. A holistic approach is thus required for its measurement and management in the work environment.
  • Besides the psychological factors, cognitive functioning at work is also affected by a wide range of variables such as previously acquired knowledge and skills; physical health and well-being; nutritional factors; past and future exposure and learning opportunities; socio-economic background and circumstances; educational and cultural factors; relationships, love and acceptance by significant others; and spiritual factors related to a personal sense of purpose, to mention but a few.
  • The work and home environments in which a person operates are therefore crucial as cognitive functioning cannot be separated from its context or seen in isolation.
  • To leverage the concept of Intelligence in the work environment involves more than merely “how intelligent” a person is. Each person is perfectly suited to a particular kind of work. Appropriate and sufficient information is thus required to understand, position and develop cognitive functioning.
  • Conventional Psychometric test methodologies are often flawed and rely on limited techniques. The idea is to move away from such inadequate and cross-culturally loaded test practices and to cease assessing educationally acquired skills as is the case with various intelligence test methodologies.
  • More robust methodologies are required, based upon validated theoretical models and automated simulation techniques which operationalise, externalise and track thinking processes. Given the complexities involved in mental functioning, the analysis of candidate responses by means of algorithmic expert systems, AI or machine learning and fuzzy logic, will contribute to the richness in interpretations of assessment results.
  • The ultimate goal of an assessment battery remains that of discovering and mapping the unique territory of an individual mind to honour and position their distinctive repertoire of talents in a way which will facilitate the full realisation of their potential.


The implementation of these principles, as part of an Intellectual Capital Management solution, may involve the following action steps and products, or assessment methodologies:

1. The training and accreditation of HR practitioners in terms of the underlying theory, measurement and utilisation of the constructs which underlie competence at work. For this purpose, Cognadev provides in-depth e-Learning courses on cognition, motivational drive and levels of consciousness, otherwise referred to as valuing systems. The models and methodologies covered by these courses are not yet addressed by university courses.

2. Facilitated discussions between HR and line functions are required to contextualise the entire talent management approach. Here the focus should be on the nature and prospects of the industry, the organisational value proposition, and the core competence of the organisation. The core organisational competencies will inform the job-specific competency requirements to which candidate profiles can be compared. For this purpose, functional job families also need to be identified.

3. This is followed by a job-analysis to determine the Stratified Systems Theory (SST) levels of work complexity of specific job families, as well as the identification of 10 to 12 job-related competency requirements of those roles or job families. The competency definitions need to reflect the correct SST level of work of a position or role, to inform job specs and facilitate the appointment of suitable role players. This is done by means of the Contextualised Competency Mapping (CCM) tool of Cognadev.
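To illustrate the idea of comparing candidate profiles to job-specific competency requirements, the sketch below scores a hypothetical profile against a set of requirements. The competency names, the 1-5 rating scale and the gap-penalised scoring rule are all illustrative assumptions, not the actual CCM methodology.

```python
from statistics import mean

def match_score(candidate: dict, job_requirements: dict) -> float:
    """Gap-penalised fit between candidate competency ratings and job
    requirements, both expressed on the same (assumed) 1-5 scale."""
    fits = []
    for competency, required in job_requirements.items():
        observed = candidate.get(competency, 0)
        # A shortfall counts against the fit; exceeding a requirement does not.
        shortfall = max(required - observed, 0)
        fits.append(1 - shortfall / required)
    return round(mean(fits), 2)

job = {"strategic thinking": 4, "stakeholder management": 3, "analysis": 5}
person = {"strategic thinking": 3, "stakeholder management": 4, "analysis": 5}
print(match_score(person, job))  # 0.92
```

A real person-job matching exercise would of course rest on validated competency definitions and the correct SST level of work, but the basic shape, a profile compared against role requirements, is the same.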

4. An organisational audit by means of a volume assessment system further contributes towards an understanding of the talent within the organisation. In addition, a mass recruitment exercise aimed at creating a virtual talent pool may be useful. Together these mass assessment initiatives will resolve the typical succession problems related to crowding and vacuum. The creation of virtual talent pools will become more critical as the new world of work, characterised by atypical organisational structures around project-based undertakings, emerges. Cognadev provides a volume assessment tool, called Cliquidity, which holistically assesses candidates and possesses the required functionalities to enable organisational audits and the creation of virtual talent pools.

5. In terms of the assessment of people, cognition, motivation, and values and culture need to be addressed, as these three factors form a crucial foundation of any intellectual capital management solution:

  • The culture of the organisation is best determined by assessing the executive as well as representative groups of employees from various regions or functional units. Understanding the organisational culture requirements will optimise selection, placement, team compilation, leadership, developmental, and succession solutions within the organisation. For this purpose, Cognadev provides the Value Orientation (VO) assessment tool which is largely based on the Spiral Dynamics (SD) model.
  • Seeing that the proposed Intellectual Capital Management approach mainly rests on levels of work complexity, the cognitive preferences and capabilities of candidates need to be assessed. For the cognitive assessment of existing staff, Cognadev provides the Cognitive Process Profile (CPP) while the assessment of school and university leavers can be undertaken by means of the Learning Orientation Index (LOI). For purposes of mass recruitment and organisational audits, a low-cost volume assessment, the Cliquidity Adaptive Reasoning Assessment (CARA) can be used.
  • Motivation, too, is a critical prerequisite of work performance. Constructs related to drive and energy, factors that energise or drain a person, self-insight, energy themes, defence mechanisms, life scripts, dynamic personality patterns, EQ and motivational patterns, amongst others, can be measured using a non-transparent assessment tool called the Motivational Profile (MP).

6. Assessment results need to be reported upon by integrating the cognitive, values and motivational profiles of role players with the competency requirements of their work. If possible, not only the psychological results, but information on a candidate’s knowledge and experience as well as performance ratings should be covered by the integrated reports. Seeing that it can be a tiresome and time-consuming job to compile hand-written reports, Cognadev has developed an automated report generator, namely the Integrated Competency Report (ICR). These integrated person-job matching competency reports are also very useful for feedback purposes and for future performance discussions between assessment candidates and their managers.

7. Individualised feedback to test candidates contributes to their personal development as it enhances self-insight, informs their career and development decisions and optimises interpersonal functioning. Feedback can be done through the provision of written reports, or more ideally, through personal discussions with HR practitioners and/or managers. At executive levels the involvement of a psychologist or an executive coach may be required. Group feedback can also be provided and create an opportunity for team development.

8. Performance evaluations in terms of job-related competencies further contribute to Intellectual Capital Management. Online 360-degree performance questionnaires, completed by the individuals themselves, their peers, managers and subordinates, may also contribute to employee performance, self-insight and engagement. Performance discussions that offer realistic and honest feedback on the person's strengths and development areas should culminate in a developmental plan. The CCM job analysis tool offers a 360-degree competency evaluation questionnaire.

9. The above-mentioned competency-based approach to talent and intellectual capital management, will enable the integration and alignment of all HR functions, including:

  • recruitment
  • selection and placement or talent acquisition and retention
  • succession planning and promotion
  • performance management
  • individual and team development
  • team compilation
  • career guidance and development
  • job structuring
  • organisational development (OD) as well as
  • remuneration and compensation.

10. The entire approach should ideally be accompanied by ROI evaluation and reporting.

The validity of the CPP report

By Paul Barrett on August 23, 2019


There are several factors that could render a CPP result invalid. This blog outlines some of the factors that could affect a candidate's CPP results report.


The factors contributing to invalid CPP results:

Acceptable test conditions with regard to noise, temperature, lighting and privacy are prerequisites for obtaining valid assessment results. Factors such as incorrect administration; inadequate language proficiency (a grade 5 mother tongue language proficiency is required for a valid CPP result); computer illiteracy; emotional factors including anxiety, demotivation, preoccupation and depression; health-related factors; and disabilities and medication, amongst other factors, may all affect the validity of a CPP assessment result.

The most common factors contributing to invalid CPP results are, however, anxiety, demotivation and inadequate language proficiency.

In some cases, invalid reports will either not be scored by Cognadev or will be flagged as “Validity questionable”. It is, however, the task of the CPP practitioner involved to clarify whether a report is valid or not.


The most common causes of invalid reports:

Practitioners can determine the validity of CPP reports by checking for the most common cause of invalid reports, namely performance anxiety. They can also contact a Cognadev consultant for an expert opinion.

Anxiety or preoccupation, both of which may affect concentration, are usually indicated by significantly lower scores on the “Pragmatic” and “Judgement” dimensions as compared to other processing scores in the CPP report. These two scores indicate the candidate’s task focus. Relatively low scores on “Pragmatic” and “Judgement”, in combination with a “Trial-and-error” and/or “Reactive” style, may indicate performance anxiety or preoccupation at the time of the assessment. Please note that these two styles alone do not indicate an invalid CPP result but may actually reflect a tendency to go about problem-solving in an unplanned, reckless or superficial manner in unfamiliar contexts. In other words, “Trial-and-error” and/or “Reactive” styles may be a valid reflection of a person’s cognitive approach.
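The screening rule described above can be sketched as a simple heuristic. The dimension names follow the text, but the numeric scale, the threshold and the function itself are illustrative assumptions, not Cognadev's actual validity logic.

```python
def flag_for_review(scores: dict, styles: set) -> bool:
    """Flag a profile for practitioner follow-up when the two task-focus
    dimensions ('Pragmatic', 'Judgement') fall well below the remaining
    processing scores AND a 'Trial-and-error' or 'Reactive' style is present."""
    focus = [scores["Pragmatic"], scores["Judgement"]]
    others = [v for k, v in scores.items() if k not in ("Pragmatic", "Judgement")]
    mean_gap = sum(others) / len(others) - sum(focus) / len(focus)
    depressed_focus = mean_gap > 1.5  # illustrative threshold, not an official cut-off
    risky_style = bool({"Trial-and-error", "Reactive"} & styles)
    return depressed_focus and risky_style

profile = {"Pragmatic": 3, "Judgement": 3, "Analytical": 6, "Logical": 6, "Memory": 5}
print(flag_for_review(profile, {"Reactive"}))   # True
print(flag_for_review(profile, {"Logical"}))    # False
```

Note that the heuristic requires both conditions, since, as explained above, a Trial-and-error or Reactive style on its own is not evidence of invalidity.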

Candidates from disadvantaged educational backgrounds may show undeveloped cognitive capability. They are likely to obtain relatively low scores on the “Analytical”, “Logical” and “Judgement” processing competencies of the CPP. In other words, they are unlikely to independently analyse issues by pulling them apart and critically reason about further possibilities. Instead, there may be a tendency to memorise, rely on intuition and capitalise on previous experience. Their “Learning” and “Memory” scores are thus often significantly higher than all the other processing scores. In the case of educationally disadvantaged candidates, it should be pointed out that analytical skills can be acquired relatively easily via cognitive training aimed at the internalisation of certain metacognitive criteria. Cognadev can advise HR practitioners in this regard.

Although CPP reports characterised by significantly lower scores on “Logical reasoning” and “Verbal Conceptualisation” only, as compared to the rest of the processing profile, are not necessarily invalid, they indicate a personality- or culturally-based resistance to transformational thinking. Such a profile shows a preference for the familiar and a tendency not to apply critical thinking or to reconceptualise issues. This tendency may also affect the candidate’s performance in everyday life and work. It may at times also indicate a degree of demotivation.


The CPP has not been devised to diagnose neurological problems caused by factors such as trauma, long-term stress, substance abuse and/or psychiatric conditions. Alternative evidence of a person’s cognitive functioning may be required in the case of possibly invalid CPP reports. This can be obtained through structured interviews, performance appraisals, assessment centre evaluations or other techniques.

Invalid CPP reports nevertheless offer valuable information that can be interpreted qualitatively.


To read more on the reassessment of the Cognitive Process Profile, follow the link provided here: https://www.cognadev.com/reassessment-of-the-cognitive-process-profile.

Facilitated and/or Interpreted Assessments

By Maretha Prinsloo on August 15, 2019


This article on facilitated and interpreted assessment methodologies forms the third of a four-part series on cognitive assessment techniques aimed at selection, placement and development of people in the educational and work contexts. The first part entailed a discussion of simulation exercises, the second a review of Conventional Psychometrics and the fourth part is a comparative summary of these various approaches.

Here the focus will be on structured interviews, 360-degree evaluations, and data scraping and artificial intelligence (AI) solutions.

Facilitated or Interpreted Methodologies


3.1 Structured interviews

Although there is ample evidence from meta-studies that unstructured interviews are no less effective than structured interviews in predicting work performance, the use of structured interviews is an established and widespread practice in selection and placement, especially with regards to the placement of people in key roles in organisations.

As in the case of assessment centre observations, the validity of structured interview results depends on a number of factors. The insight, skill and objectivity of the interviewer and the verbal skill, honesty, accuracy and objectivity of the interviewee are, for example, crucial factors in determining the metric properties of interview results.

Several structured interview techniques are available for the assessment of a person’s current and potential job suitability and career progress. The results of these are often assumed to be indicative of the cognitive functioning of candidates. A well-known example of a structured interview technique is Gillian Stamp’s Career Path Appreciation (CPA), which is based on the Stratified Systems Theory of Elliott Jaques. According to the SST, various levels of work complexity can be identified based on the time span involved in the implementation of decisions. The CPA interview reflects the principle that both work and the human capacity to master that work are hierarchically stratified, where higher levels entail greater levels of work complexity.

Stamp’s structured CPA interview technique consists of three sections focusing on phrases, symbols and work history. The interviewee has to select phrase cards which best describe their approach to work and then elaborate on those choices. In addition, a card sorting task using geometric symbols requires the test candidate to discover a sorting rule. This card sorting task informs the stylistic preference of the candidate. In the work history part of the interview, the candidate has to provide a chronological description of past work assignments, their personal experience of the associated level of challenge involved and the time span of these tasks.

A candidate’s CPA test result to some extent depends on the interviewer’s understanding and classification of the candidate’s explanation of their career preferences and progress to date, as well as their future ambitions. The CPA yields an indication of a person’s current work capacity plus a prediction of their future work capacity; the latter is referred to as “mode”. The mode is determined according to empirically derived age-related progression curves which indicate levels of work progress. Stamp’s work laid the foundation for the development of a large number of relatively similar structured interview techniques.

Benefits of structured interviews aimed at evaluating cognitive competencies are that they inform person-job matching for selection and placement decisions, leadership identification and development as well as organisational development initiatives.

Structured interviews are also characterised by certain shortcomings. The subjective perceptions of interviewers, and thus the problem of inter-rater reliability, for example, remain a serious challenge. Further factors that may derail the predictive validity of structured interviews include the test candidate’s verbal skills, warmth and personality orientation. Important too is the degree of rapport between the interviewer and the interviewee. Since the test candidate is expected to report on their own performance, instead of actually performing a task, the individual’s hindsight account of their own functioning may include the justification of past performance and the possible overgeneralisation or exaggeration of previous work-related achievements. Cultural factors, ambition, honesty and self-image may all also skew the results.

A serious weakness of structured interviews such as the CPA is that these techniques often capitalise on the current position and career history of the individual, which may well skew the results. Basing a finding about a person’s ideal work environment on their current position is almost a circular argument. In addition, while many people would prefer to talk about their work rather than to do an assessment, some personality types, such as introverts and/or unassuming individuals, may be underestimated in interview situations. Skilled and wise interviewers, as well as clear scoring criteria, are therefore required in the case of structured interviews.

Regardless of possible criticism, structured interviews such as the CPA, which are linked to the requirements of work environments as specified by the SST model, are widely and effectively applied for job- and organisational structuring as well as for people-job matching, succession and remuneration purposes.

3.2 360-degree evaluations

Human Resources decisions regarding the placement, promotion, succession, team compilation, development and remuneration of people can significantly benefit from valid performance appraisal data. Unfortunately, such data is normally subjective in nature and typically of poor quality. 360-degree competency-based questionnaires, also referred to as multi-rater or multi-source feedback techniques, are thus often deployed to introduce some degree of objectivity, to gather various opinions, and to provide anonymous feedback to role players on how others perceive their performance.

The raters involved may include the candidates themselves as well as their managers, peers, subordinates and other stakeholders. The competencies according to which performance is measured, are usually operationalised in terms of observable behaviours, and contextualised to reflect the strategic aims and core competence of the organisation as well as its culture. The evaluations are mostly online, require 20 – 30 minutes to complete, are largely qualitative and involve the identification and rating of work-related strengths and weaknesses. The results usually inform performance feedback and development plans or training initiatives.
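The multi-rater mechanics described above can be sketched as a simple aggregation, averaging each competency per rater group so that self-ratings can be compared with how managers, peers and subordinates see the same behaviours. The rater groups, competency names and rating scale below are illustrative assumptions, not any particular vendor's format.

```python
from collections import defaultdict

def aggregate_360(ratings):
    """Average each competency per rater group.
    `ratings` holds (rater_group, competency, score) tuples."""
    buckets = defaultdict(list)
    for group, competency, score in ratings:
        buckets[(group, competency)].append(score)
    return {key: round(sum(v) / len(v), 1) for key, v in buckets.items()}

ratings = [
    ("self", "communication", 5),
    ("peer", "communication", 3),
    ("peer", "communication", 4),
    ("manager", "communication", 3),
]
print(aggregate_360(ratings))
# A self-rating of 5.0 against a peer average of 3.5 is exactly the kind
# of gap that feeds the performance feedback discussion.
```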

Given the subjective nature of these interpersonal evaluations, the results are often biased and fail to meet the metric criteria of validity and reliability. 360-degree procedures inherently also emphasise employee shortcomings, which may cause demotivation and sap morale. Cumbersome data collection processes may even be involved. In addition, the use of 360-degree feedback may encourage employees to manage their image as opposed to concentrating on their own value add within the organisation. Halo effects may prevail in that those who make a favourable social impression, or who come across as extraverted, are normally regarded as more intelligent than those who communicate less, are task-focused and pursue long-term goals as opposed to immediately observable results. Whether 360-degree performance evaluations contribute to employee performance, however, remains a controversial issue.

The use of this assessment methodology has, nevertheless, been around since the 1950s and remains widely implemented in most large organisations. It seems that the use of 360-degree evaluations is, however, far from ideal in capturing the cognitive skills and potential of the candidate involved.

3.3 Data scraping and Artificial Intelligence (AI)

The use of Artificial Intelligence (AI) and machine learning techniques has recently opened up new avenues in people assessment. Within the assessment sphere, scraping, or the extraction of data from websites, has become very popular. Automated scraping of web pages is enabled by techniques such as parsing for text extraction, the harvesting of cloud-based platforms, and text pattern matching.
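A minimal sketch of the parsing and text-pattern-matching steps mentioned above, using only the Python standard library. The example page and the email pattern are illustrative; real scraping pipelines are far more elaborate (and raise the legal and ethical issues discussed below).

```python
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only the text nodes of an HTML page, discarding markup."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def extract_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

def find_emails(text: str) -> list:
    # Simple pattern match for email-like strings in the extracted text.
    return re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)

page = "<html><body><p>Contact: jane.doe@example.com</p></body></html>"
print(find_emails(extract_text(page)))  # ['jane.doe@example.com']
```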

Given the availability of the social media profiles of most people, social media data analysis has become the most viable source of information for HR practitioners and recruiters. The techniques used to analyse personal data mostly focus on the profiling of personality. For this purpose, the big five personality characteristics, or OCEAN framework (Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism), is most often relied upon and reported on. Cognitive characteristics, too, are inferred from social media activities.

The information that is electronically gathered on personality and cognitive attributes can be used for diverse purposes including targeted marketing, commercial competition, political and market manipulation, as well as for HR purposes such as recruitment and the matching of people and job profiles to inform placements. Through job and resume analyses aimed at optimising job fit, for example, these techniques largely reduce the time and effort to recruit, place, develop and retain employees. Artificial intelligence (AI) and machine learning technology combined with competency analyses are thus used to significantly refine talent management practices.

Several benefits can be identified for the use of AI and data scraping techniques within the recruitment domain. For one, it seems that data scraping can potentially generate more accurate results than self-report psychometric personality tests, although this only seems to apply for screening purposes. Providers of data scraping software have also claimed superiority over 360-degree evaluations, in that just 10 “likes” on social media can appraise a person’s profile better than their colleagues can. The techniques can also be implemented quickly and effectively and thus hold significant financial and logistical benefits.

AI solutions for people assessment are, however, also characterised by certain weaknesses. Potential job applicants who are not active on social media and/or who have limited digital footprints may be excluded from employment opportunities. Especially in the case of high-stakes employees and leadership roles, a more substantial assessment approach is required. Malicious scraping to steal information or use it for illegal purposes has also surfaced, as was the case with the now infamous Cambridge Analytica scandal, where personal data was leaked to politicians and marketeers to inform political manipulation strategies.

Many applicant tracking and data scraping systems are currently available. Investment in their further improvement may even render human involvement in recruitment redundant, which will undoubtedly affect the quality of the process involved, not to mention the outcomes. In the meantime, the goal remains to fully automate high-volume recruitment.

Simulation Exercises

By Maretha Prinsloo on August 15, 2019



Based on decades of research findings regarding the predictive validity of intelligence tests in the work environment, most Human Resource (HR) practitioners regard intellectual functioning as the best psychometric indicator of work performance. Intellectual functioning largely refers to a person’s cognitive preferences and capabilities in terms of learning, problem solving, understanding, conceptualisation, decision making and responding. Organisations thus tend to opt for the use of cognitive assessments for the selection and placement of employees.

In this series of four articles, of which this is the first, various methodological approaches to the assessment of intellectual functioning are reviewed and summarised. The articles also touch on the specific theoretical assumptions which underlie the assessment methodologies of the various schools of thought within intelligence research.

The most common approaches capitalised on in this regard are the Differential, Information Processing, Developmental, Contextualist and the Neurosciences paradigms. Whereas the most commonly applied test methodology, namely “ability testing” which is associated with the Differential approach in Psychology, focuses on domain-specific knowledge and the application of logical-analytical skills; the Information Processing approach tracks dynamic thinking processes; the Developmental approach measures the acquisition of age-related intellectual and behavioural skill; the Contextualist position analyses context- and culture specific cognitive competencies; and the Neurosciences approaches focus on brain activity.

Differential psychology assumes that “ability” reflects domain-specific skills which result from a combination of hereditary and educational factors. It is referred to as the “what” of intelligence. The Information Processing approach, on the other hand, concentrates on the “how” of thinking processes, and tends to externalise and track cognitive “preferences and capabilities” through means such as algorithmically driven expert systems. It is thus subject-dependent and largely transcends domain-specific content. The Contextualist paradigm, which underlies the assessment of cognitive competencies, emphasises the “where” and “when” of intelligence within specific cultural and environmental settings. The Neurosciences approach capitalises on advanced measurement techniques including Transcranial Magnetic Stimulation (TMS), Electroencephalography (EEG), Average Evoked Potential (AEP), Functional Magnetic Resonance Imaging (fMRI), Diffusion Tensor Imaging MRI (DTI-MRI), and other techniques to study brain processes related to attention, epilepsy, and the like. The Neurosciences approaches are not addressed in this series.

The various methodologies used for the assessment of the intellectual functioning of people include simulation exercises, conventional psychometric tests, and facilitated or interpreted techniques such as structured interviews, 360-degree evaluations, and AI-based data analysis.

Given the sophisticated and detailed nature of thinking process simulations, this methodology will first be addressed here, followed by a critical comparison of simulations with alternative cognitive assessment methodologies. The overall emphasis of this series of brief articles, will thus be on the simulation of thinking processes as a cognitive assessment methodology.

1. Simulation Exercises

Simulation exercises involve dealing with real-life cognitive challenges. They may or may not include content-specific challenges. Most of these techniques can, however, be regarded as competency-based. Because they replicate the processing requirements of actual work, their metric properties, construct and predictive validity in particular, tend to exceed those of most other approaches to cognitive assessment.

Typical simulations include in-basket exercises, role plays and group exercises, all of which are mostly referred to as assessment centre methodologies, as well as knowledge and skill-based games and situational judgement tests which assess technical skills and job-related decision-making capabilities. In addition, there are also simulation games which require the application of detailed, operationalised thinking skills, but which are largely devoid of domain-specific content and thus do not focus on previously acquired knowledge and skills. The latter simulations are best suited to assess cognitive processing tendencies and learning potential across groups and cultural contexts.

The following types of simulation exercises will now briefly be discussed: thinking process simulations, assessment centres, situational judgement tests (SJT) and gaming.

1.1 Thinking process simulations

The assessment methodology which can be described as thinking process simulations reflects the Information Processing paradigm in intelligence research. This assessment approach does not rely on job-specific content, as do most in-basket and other assessment centre techniques aimed at measuring managerial or job-related skills. Instead, thinking process simulations involve unfamiliar tasks which require the application of specific information processing competencies. In other words, unlike simulations which measure specific knowledge, this approach is largely subject-dependent and content-independent. The theoretical model involved, namely the Information Processing Model (IPM), forms the basis of the specific assessment techniques which represent process simulations, namely the Cognitive Process Profile (CPP) and the Learning Orientation Index (LOI) of Cognadev.

Although supervised, the CPP and LOI are largely self-administered assessments. The standardised delivery and automated scoring of these assessment tools are aimed at producing consistent and comparable results. Extensive and in-depth reports are generated automatically. Subjective interpretations of a candidate’s performance therefore do not apply.

The CPP and LOI represent a cognitive assessment methodology aimed at operationalising, externalising and tracking a test candidate’s thinking processes according to thousands of measurement points and feedback loops. These two assessments measure a person’s real cognitive responses to an unfamiliar assessment environment where the person has to make sense of, and meaningfully interpret, both structured and fuzzy information.

In the case of the CPP and the LOI, a test candidate can project their own preferred level of cognitive complexity onto the task (which is indicated as their preferred “unit of information”); apply a preferred stylistic approach (such as an Intuitive, Logical, Random, Metaphoric, Learning or any of the 15 cognitive styles measured); and create meaning in any way, as there are strictly no right and wrong answers in these assessments, especially the CPP. The person can also work at their own pace, as time does not affect power or capability scores. This is an important consideration in cognitive test construction, as speed and power are separate constructs when it comes to intellectual functioning. The undifferentiated measurement of speed and power in intelligence research also holds implications for the adverse impact of an assessment.

Furthermore, the content of the CPP and LOI is unfamiliar and not knowledge-based, and therefore to some extent independent of previous educational and work exposure. Because the CPP and LOI tasks are equally unfamiliar to all test takers, without presenting the information in a decontextualised or dis-embedded manner, the possibility of group bias in the assessments is reduced. Conventional ability testing, on the other hand, tends to capitalise on specific content or knowledge domains (such as spatial, verbal or non-verbal item content) while presenting the item content in a dis-embedded manner.

The CPP and LOI both measure cognitive “capability and preference” and predict the way in which a person is likely to perform in the work environment. These assessments do not claim to measure “ability”, as providers of IQ tests do. Unlike ability tests, the CPP and LOI also provide detailed developmental guidelines that can be used for the further development of thinking processes.

The above thinking process simulations for cognitive assessment, and the available techniques, namely the CPP and LOI, offer various benefits to test users, including:

  • Unlike alternative test techniques such as IQ tests, assessment centers, SJTs, gaming, data scraping and questionnaires, the thinking process simulations, namely the CPP and LOI, are based on a sound theoretical foundation, the Information Processing Model (IPM). Besides the CPP and LOI, no other assessment methodology to date appears to be based on a self-contained theoretical model with construct validity.
  • The processing simulations involved in these assessments are also not interpreted by the candidate or others, but involve real problem-solving performance. Unlike questionnaires and structured interviews, the CPP and LOI thus do not require self-reporting, which introduces measurement error and the justification of past personal performance.
  • In addition, the task requirements of the CPP and LOI are not as transparent as those of questionnaires, and the results can therefore not easily be manipulated by the test taker.
  • The problem of subjective rater interpretation, as often is the case with assessment center and structured interview methodologies, is also resolved by the standardised and automated nature of the CPP and LOI assessments and reports, where the results are objectively and algorithmically calculated in terms of thousands of measurement points.
  • The CPP and LOI furthermore offer a fundamental solution to the limited, timed and cross-culturally loaded nature of typical IQ and ability tests.
  • The CPP and LOI focus not only on already developed knowledge and skills, as IQ tests and assessment centers do, but also predict learning potential and the future acquisition of information processing competence, capitalising on the domain-free content of unfamiliar tasks.
  • Thinking process simulations as capitalised on in the case of the CPP and LOI, thus incorporate a sound theoretical basis for the measurement of learning potential by tracking and analysing learning curves as well as processing tendencies in terms of 16 criteria or characteristics of cognitive functioning, to identify strengths, weaknesses and metacognitive awareness, all of which can be addressed developmentally.
  • Unlike some structured interviews the CPP and LOI do not reduce the complex concept of learning potential and cognitive modifiability to assumptions regarding age-based prediction of potential.
  • The cost involved in time-intensive assessment center and structured interview techniques is, in the case of the CPP and LOI, reduced by automated online assessment and automated reporting.
  • In the case of the CPP and LOI, the focus is on the practical utility of the results for developmental, selection and placement requirements.
  • One of the key advantages of using the CPP and LOI as opposed to alternative methodologies of cognitive assessment, is the lack of adverse impact and cross-cultural bias. The CPP and LOI rely on several design features to ensure valid assessment across groups, including:
    • allowing test candidates to apply any of 15 different stylistic approaches, to accommodate personal and cultural preferences in problem-solving approach;
    • not capitalising on right-or-wrong answers, instead focusing on meaningful conceptualisation, which is scored in terms of certain processing criteria;
    • not applying time limitations, in that cognitive speed and power are measured separately;
    • activating auditory, visual and kinesthetic modes of processing, to accommodate individual and group differences in processing approach;
    • avoiding decontextualised and disembedded item content, to cater for test candidates from contextual language backgrounds;
    • using test-train-test techniques to gradually introduce unfamiliar task requirements;
    • providing interactive feedback on performance, to track learning curves;
    • utilising unfamiliar task content to create equal opportunities for candidates from different educational and socio-economic backgrounds;
    • requiring only basic (grade 5 mother-tongue) language proficiency, to accommodate those who are not linguistically skilled;
    • not measuring grammar, spelling or sentence construction skills, which are largely educationally developed.
  • The CPP and LOI have been researched in-depth and the results of validity, reliability and adverse impact studies are summarized in the research manuals.
  • The CPP results of adults in the work context are commonly used for purposes of career guidance, selection, placement, development and coaching, succession, the identification and development of leadership potential, as well as organisational development. The LOI, aimed at the 16–30 age range, is used for purposes of career guidance, bursary allocation, fast tracking, development and selection.

1.2 Assessment center methodologies

Assessment center methodologies represent the Contextualist approach to psychological research in that the focus is on the measurement of the competencies required for effective performance within specific knowledge and skill domains.

The use of a variety of customised assessment centers is becoming common practice within organisations aiming to determine technical skills, behavioural tendencies, managerial skill and leadership potential.

Assessment center evaluations largely focus on behavioural and/or conceptual performance within domain specific areas. A variety of techniques are involved including in-basket exercises, leaderless groups, interactive group exercises, skill-specific games and case studies as well as questionnaires. Candidates who are evaluated by means of these exercises are often, but not necessarily, observed live and in real time by raters. Responses can also be evaluated by manual scoring of open-ended questions or by automated means. These techniques can briefly be described as follows:

  • Group exercises mostly involve a small group of young professionals or managers who are required to perform a pre-defined task which involves collaboration, decision making and leadership, while being observed by raters in terms of certain performance criteria. These exercises may be fairly time consuming.
  • Virtual stylised simulations, in the form of video games that are built around specific business skills, are often used for the screening of young professional candidates. These assessments are popular in large organisations where the aim is to create talent pools.
  • Both online and directly observed in-basket exercises, capitalising on real life managerial challenges, are often performed to determine managerial and/or other skills. The scores of candidates with previous exposure to managerial requirements may therefore be elevated.
  • Role plays are often used to determine behavioural skills associated with sales or leadership performance. They may also form part of an interview.
  • Questionnaires in the form of assignments that are often completed in the candidate’s own time may be used to determine the managerial insight, decision making skill and procedural approach of candidates. Here, the validity of the results may be derailed in cases where test candidates obtain guidance and advice from others.

The benefits of assessment centre methodologies include adaptability to a variety of applications, including real and online games, interviews and questionnaires aimed at measuring different competencies and skill sets. In addition, assessment centres reflect real-life work requirements and capabilities, which increases the predictive and face validity involved. Given the subjective nature of rater impressions, however, inter-rater reliability may pose challenges. To alleviate this problem, the assessment criteria need to be clearly operationalised and specified in detail. Another challenge is that the performance measured by assessment centres is largely affected by previous experience, which makes these methodologies less suitable for the prediction of learning potential. The use of assessment centre methodology can also be expensive and time-consuming. It is, however, widely used and relied on for purposes of selection, placement, succession and the development of people in the work environment.

1.3 Situational Judgement Tests (SJTs) and gaming for screening purposes

Situational Judgement Tests or Inventories, abbreviated as SJTs or SJIs, have been around since the mid-1900s. These assessments capitalise on realistic workplace scenarios for recruitment, screening and selection purposes.

The construction of SJTs relies on job analyses and the opinions of job experts, as most of these assessments are tailor-made in terms of particular work requirements. The test content of SJTs can be presented through a variety of modalities, including video, audio and printed materials. The test items normally describe work challenges in response to which certain options need to be selected or prioritised. The goal is to evaluate the appropriateness of a person’s responses or judgements in certain work-related situations. Behavioural tendencies are also inferred as a basis for predicting a person’s role suitability. The test content often directly reflects role-related operational tasks and decisions. SJTs are usually not timed.

There are a number of benefits to using SJT assessments. First and foremost, they reflect specific role requirements, as the scenarios specified in the assessment closely overlap with work-related tasks. This normally contributes to the predictive validity of assessment results. SJTs can be used to assess a variety of competency constructs, using different techniques, and are relatively easy to develop, customise, administer and score. Online SJTs are most appropriate for high-volume screening purposes.

The use of SJTs is, however, also criticised for certain shortcomings. For one, the value-add of an SJT depends entirely on the quality of the items of the specific test. These assessments are also most appropriate for selecting candidates for operational roles, as opposed to strategic or creative roles. In addition, the scoring of SJTs remains problematic, given the absence of objective criteria for determining the best possible answers. Job experts may, for example, differ as to the most appropriate responses to a situation. In these instances, a consensual scoring approach is often used. The latter may, however, not necessarily appreciate the potential value-add of unusual, creative, intuitive or complex logical approaches.
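To make the consensual scoring idea concrete, here is a minimal, hypothetical sketch (not any specific SJT vendor's algorithm): each response option is weighted by the proportion of subject-matter experts (SMEs) who endorsed it, which illustrates why a rare but potentially creative answer receives little credit.

```python
# Hypothetical item: four response options, endorsed by a panel of 10 SMEs.
sme_endorsements = {"A": 6, "B": 2, "C": 1, "D": 1}

def consensus_score(chosen_option, endorsements):
    """Score an answer as the fraction of SMEs who endorsed that option."""
    total = sum(endorsements.values())
    return endorsements.get(chosen_option, 0) / total

print(consensus_score("A", sme_endorsements))  # 0.6 (the consensus answer)
print(consensus_score("C", sme_endorsements))  # 0.1 (a rare, possibly creative answer)
```

Under this scheme, an unusual response scores poorly by construction, regardless of its quality, which is precisely the criticism raised above.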

Adverse impact effects often characterise SJTs, given their experience-based, visual, cultural and socio-economic bias. In terms of metric properties, SJTs may lack content validity, in that the work samples used as items mostly fail to represent the entire required knowledge and skills base involved. In addition, their content-specificity makes it difficult to investigate their metric properties, the test-retest reliability in particular.

SJTs, games and simulation exercises overlap with, and represent variants of, assessment centre methodologies.

Gamification, which often involves scenario-based items, has become a popular screening technique in the recruitment of job candidates. Not only competencies and decision-making skills, but also conventional psychometric constructs related to personality and intellectual functioning, are inferred from these techniques. The quick and easy-to-use games are usually delivered on mobile devices and aimed at younger generations. Not only do these techniques reach a wider audience of both candidates and employers, but the data can easily be filtered and matched to the competency requirements of work to improve placement decisions. Candidates with technological skills and experience may, however, achieve better scores on gamified assessments, without any assurance that the skills measured would transfer to work-related performance.

Reassessment of the Cognitive Process Profile

By Cognadev on August 15, 2019


The validity of CPP results

The CPP capitalises on a person’s cognitive responses to new and unfamiliar information. A candidate’s first CPP results are thus usually the most valid, unless:

  • The candidate’s performance has been affected by emotional factors including extreme performance anxiety, stress, preoccupation and/or demotivation (note that a manageable degree of performance anxiety may even improve concentration and will thus not affect the validity of the report);
  • Physical factors related to excessive fatigue, medication, disability and/or pain, for example, have played a role;
  • The assessment took place under unfavourable assessment conditions, which may include noise, extreme temperatures, technological problems and/or other disturbances; and
  • A considerable period of time has elapsed since the previous CPP assessment, during which time the candidate may have developed further cognitive skills.

In the absence of the factors listed above, and in instances where the first CPP can be regarded as valid, a reassessment should be postponed for at least five years, if possible. An exact time frame for valid reassessment is, however, difficult to specify.

At times it is, however, useful to re-administer the CPP to determine the impact that developmental initiatives, work exposure, maturity, changes in attitude, self-confidence and interests may have had on a candidate’s cognitive functioning. It may also be useful to reassess those whose existing CPP reports are of questionable validity.


What if there are several sets of CPP results available?

When several sets of CPP results are available for one candidate, qualitative interpretation by a skilled practitioner is required. Cognadev consultants can assist in this regard. Cognadev consultants normally also link the most valid set of results to the client’s account. Accredited practitioners may, however, request access to all the various sets of results of a particular candidate.


The potential impact of CPP reassessment

Certain processing scores are more easily affected by reassessment than others. The “Learning”, “Speed”, “Judgement” and “Memory” scores of a second or third set of CPP results are often somewhat elevated – but not for all candidates. Other dimensions are, however, more resistant to change. These include the “Potential level of work” indication, as well as the “Units of information”, “Complexity” and “Integration” scores.

It seems that candidates who prefer familiarity are more likely to obtain higher scores when reassessed, whereas candidates who tend to seek cognitive challenge, and who achieved strategic profiles with a first CPP assessment, may not find repeated exposure to the task as engaging. The latter candidates may, therefore, not apply themselves as rigorously as they initially did, which may result in somewhat lower reassessment scores. Their first set of CPP results thus remains the most valid.


Statistical evaluation of CPP reassessment

Even though reassessment may affect the validity of the assessment results, CPP test-retest reliability studies on a homogeneous sample of n = 87 and heterogeneous samples of n = 2724 and n = 475 have indicated test-retest Gower similarity indices of around 0.7 to 0.9 for the cognitive styles, processing competencies and level-of-work results. These reliability studies are reported in more depth in the CPP Technical Manual. To read more about the evidence-based research on the CPP, please take a look at Cognadev’s Technical Report Series.
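As an illustration of the kind of index reported above, the following sketch computes a Gower similarity between two assessment profiles, assuming numeric scores are compared via range-scaled distance and categorical results via exact matching. The variable names and score ranges below are hypothetical, not taken from the actual CPP scales.

```python
def gower_similarity(profile_a, profile_b, numeric_ranges):
    """Average per-variable similarity across a mixed profile:
    numeric variables score 1 - |a - b| / range; categorical
    variables score 1.0 on an exact match, otherwise 0.0."""
    similarities = []
    for key in profile_a:
        a, b = profile_a[key], profile_b[key]
        if key in numeric_ranges:                        # numeric variable
            similarities.append(1 - abs(a - b) / numeric_ranges[key])
        else:                                            # categorical variable
            similarities.append(1.0 if a == b else 0.0)
    return sum(similarities) / len(similarities)

# Hypothetical test and retest profiles for one candidate.
test_profile   = {"speed": 7, "memory": 6, "style": "Logical"}
retest_profile = {"speed": 8, "memory": 6, "style": "Logical"}
ranges = {"speed": 10, "memory": 10}

print(round(gower_similarity(test_profile, retest_profile, ranges), 3))  # 0.967
```

A value near 1.0 indicates highly similar test and retest profiles, consistent with the 0.7 to 0.9 range reported for the CPP.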

Strategies to develop analytical and strategic thinking

By Paul Barrett on June 26, 2019


This blog follows on from the “Principles of organisational and individual learning” entry, which focused on the different ways in which organisations can extend the traditional concept of learning by implementing the principles of “learning organisations” (see https://www.cognadev.com/principles-of-organisational-and-individual-learning). That entry argued that organisations can enhance learning by focusing on the development of employees’ cognitive thinking processes.

In order to influence the cognitive development of employees within an organisation, it is necessary to start by assessing the current and potential cognitive processing capabilities and preferences of individuals and their teams. Two of the assessment techniques available for this purpose are the Cognitive Process Profile (CPP) and the Learning Orientation Index (LOI) as provided by Cognadev. Both these tools offer automated simulation exercises which operationalise, externalise and track thinking processes according to thousands of measurement points. The results are algorithmically interpreted by expert systems and comprehensive reports on an individual’s cognitive functioning and metacognitive awareness are generated. Metacognition refers to an awareness of one’s own thinking processes as well as the use of certain metacognitive guidelines or criteria to direct and evaluate these thinking processes.

The cognitive requirements of the organisational work environment may also be assessed by means of the Contextualised Competency Mapping (CCM) tool, which allows for person-job and team-job matching to guide learning and development initiatives. The CCM tool is also provided by Cognadev.
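The person-job matching idea can be sketched as follows. This is a hypothetical illustration, not the CCM's actual algorithm: a person's competency ratings are compared against the required levels for a role, and the degree of fit is averaged across competencies.

```python
def match_score(person, job_requirements):
    """Average, over the required competencies, of the fraction of
    each requirement the person meets (capped at 1.0)."""
    ratios = [min(person.get(comp, 0) / level, 1.0)
              for comp, level in job_requirements.items()]
    return sum(ratios) / len(ratios)

# Hypothetical required levels (1-5) and a candidate's assessed levels.
job = {"analysis": 4, "integration": 3, "judgement": 4}
candidate = {"analysis": 4, "integration": 2, "judgement": 5}

print(round(match_score(candidate, job), 3))  # 0.889
```

The same comparison can be run for each member of a team to identify collective strengths and gaps relative to the role's cognitive requirements.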

All of Cognadev’s cognitive assessments and training initiatives are based on a theoretical model of thinking processes. More so, the cognitive developmental initiative itself is guided by the application of a particular methodology anchored in metacognitive awareness.


The theoretical foundation for the Cognitive Process Profile (CPP) and the Learning Orientation Index (LOI):

The Information Processing Model (IPM) is the theoretical model on which both CPP and LOI assessments are based. It can be represented as follows, whereby the various cognitive processes are organised holonically, and guided from a metacognitive perspective.


[Illustration: The Information Processing Model (IPM)]


The following metacognitive criteria guide the application of all processing skills:

[Illustration: Metacognitive criteria]


Taking account of the above, the development of both analytical and strategic thinking skills will briefly be explained here.

Additionally, once the strengths and development areas, stylistic preferences and the complexity capabilities of candidates have been assessed, suitable cognitive development programmes can be planned and implemented. These should ideally involve facilitated group exercises which capitalise on real, job-related cognitive challenges.


The importance of metacognition in developing cognitive skills:

Analytical thinking and Systems / Strategic thinking skills courses should ideally entail two to three days of intensive facilitation, followed up in the longer term in collaboration with peers and mentors from the work environment. Since these two thinking skills programmes have a purely cognitive focus, a broader perspective aimed at developing emotional and consciousness factors should also be offered, to accommodate the highly integrated nature of human functioning. Most participants are able to embrace these broader principles with relative ease, whereas the development of advanced cognitive skills requires long-term practice aimed at the internalisation of metacognitive criteria.


1. The development of analytical thinking skills:

Analytical skills are particularly useful for dealing with the challenges of operational environments. Due to the dynamics and fuzziness of practical situations, everyday problem-solving challenges are often vastly more complex than the available theoretical models. Analytical problem-solving skills are thus required. The good news is that these can be developed with relative ease.

Specifically, analytical skills development entails the activation of metacognitive guidelines to deal with domain specific, job-related, problem-solving challenges. A methodology aimed at transferring and internalising specific metacognitive criteria is recommended.

The learning outcomes of analytical skills development normally include the ability to:

  • Explore situations in terms of the metacognitive criteria of relevance and clarity;
  • Establish an appropriate level of detail versus generality at which to deal with a task;
  • Work with the required detail and precision;
  • Compare various task-related aspects spontaneously;
  • Identify and apply rules systematically;
  • Differentiate between elements;
  • Identify relationships and link related aspects;
  • Order and structure information coherently to make sense;
  • Integrate new and discrepant information into existing frameworks;
  • Restructure maps and models to accommodate emerging requirements;
  • Contextualise own solutions, models or ideas appropriately; and
  • Show metacognitive awareness of all of the above cognitive processes.


Analytical skills training thus involves the assessment of each candidate’s current cognitive preferences and capabilities. Their metacognitive awareness is of particular interest. Once these profiles have been investigated, strengths and development areas in terms of metacognitive awareness can be identified at the individual and team level. A metacognitive “voice”, which best reflects a person’s strength, can then be allocated to that individual member. For example, one person may take on the role of “The voice of relevance”, another “The voice of coherence”, or “The voice of clarity” (refer to the IPM).


Furthermore, analytical skills training sessions should be facilitated by a trained professional who understands the dynamics involved in cognitive functioning. Exercises in these sessions centre on dealing with real job-related challenges. Typically, during this facilitated process, those holding certain metacognitive “voices” have the authority to interrupt the group’s problem-solving process at any time, in order to create an awareness of a neglected metacognitive criterion. The facilitator’s role therefore includes capitalising on such insights as well as reinforcing the internalisation of specific metacognitive criteria or “voices”.

The full training process is thus aimed at the “automation” or internalisation of metacognitive criteria by participants to guide their thinking processes in future. Once all group members have practised their metacognitive strengths, the roles are reversed so that they take on their underdeveloped metacognitive “voices”. This facilitated workshop should be followed up with applicable projects and include the involvement of mentors or peers to further consolidate the metacognitive skills acquisition of each person.

Whereas analytical thinking involves a focus on the constituent parts of systems, strategic / systems thinking processes reverse the relationships between the parts and the whole. In other words, in the case of systems thinking, the parts can only be understood in terms of the dynamics of the whole.


2. Building strategic / systems thinking capabilities:

Systems thinking remains a critical prerequisite for the strategic viability of organisations. This skill is unfortunately seldom taught within educational and training contexts, where the focus is mostly on the transfer of knowledge and skills. Unless systems / strategic thinking awareness is cultivated within an organisation, the principles of learning organisations will not be adopted by a critical mass of people. This may result in the gradual deterioration of learning values and ideals into mere ‘window dressing’.

According to a systems approach, the world is regarded as an integrated, dynamic whole, or a series of nested sub-systems forming a holon. In agreement with Ken Wilber (2007), who popularised Arthur Koestler’s concept of holons, the entire universe is organised such that subsequent systems levels include and transcend their predecessors.

The development of systems thinking skills to optimise strategic thinking is thus aimed at the acquisition of integrative, intuitive and holistic cognitive capabilities. As in the case of analytical thinking training, pre-assessment of delegates’ cognitive capacity and preferences will guide the specific design of a systems thinking development initiative.

A Systems / Strategic thinking course normally involves a focus on real organisational challenges in terms of four phases of analysis and conceptualisation, namely:

  • An in-depth investigation of the strategic challenges of the organisation to generate ideas, externalise positions and polarise perspectives. The complex information generated is then analysed according to a typical matrix-like technique (Kepner-Tregoe, 1997), using unique and appropriate evaluation criteria. At the end of the first phase, most participants will fully understand the strategic challenges faced. These challenges can then be defined and metaphors selected to represent their underlying dynamics.
  • The next phase involves a dynamic analysis of the root causes of these strategic challenges, and the identification of leverage points and catalysts of change. Here the definition of strategic challenge is analysed in terms of the four most appropriate archetypes as proposed by Peter Senge (1990).
  • The following step involves creative strategy formulation, whereby unusual techniques are used to stimulate creativity. Initially, this is often met with resistance from rationally trained executives and managers. In retrospect, however, these creative exercises are often regarded as the most liberating and enlightening aspect of the course. Various intuitive techniques are practised, most of which have been proposed by Sue Mehrtens (2002).
  • Consolidation of the entire process takes place by evaluating the extent to which the formulated creative strategy – aimed at dealing with the organisation’s strategic challenges – can be implemented and contextualised. Factors such as organisational structure, processes, power dynamics and measurement options to track intangible progress, are considered.
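The matrix-like evaluation technique in the first phase can be sketched as a simple weighted decision matrix in the spirit of Kepner-Tregoe decision analysis. The options, criteria, weights and ratings below are invented purely for illustration.

```python
# Hypothetical strategic options rated 1-10 against weighted criteria.
criteria = {"strategic fit": 0.4, "cost": 0.3, "risk": 0.3}
scores = {
    "Expand product line": {"strategic fit": 8, "cost": 6, "risk": 7},
    "Enter new market":    {"strategic fit": 9, "cost": 4, "risk": 5},
    "Form alliance":       {"strategic fit": 6, "cost": 8, "risk": 8},
}

def weighted_total(option):
    """Sum of criterion scores weighted by criterion importance."""
    return sum(criteria[c] * scores[option][c] for c in criteria)

# Rank the options from best to worst weighted total.
ranked = sorted(scores, key=weighted_total, reverse=True)
for option in ranked:
    print(option, round(weighted_total(option), 2))
```

In practice the evaluation criteria and weights would be derived from the group's own analysis of the strategic challenge, rather than fixed in advance.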


Thus, to summarise: in the case of analytical thinking, elements are isolated, the nature of interrelationships is investigated, a detailed and precise approach is followed, single variables are modified, and a linear or structured approach is applied. Systems thinking, on the other hand, involves unifying interrelationships, investigating the effects of interaction and considering groups of variables, and is best suited to dealing with vague, fuzzy, complex and dynamic information. Systems thinking focuses on purposes and objectives and considers effects over time.

The analytical approach is often criticised for its somewhat reductionist and factual nature, while the systems approach is characterised by holistic thinking; the two thereby emphasise different sides of the proverbial coin. Both are equally valuable components of individual and organisational learning and performance. Ultimately, all learning is rooted in culture and values, constructs that are best described in terms of consciousness. Organisational and individual learning initiatives should, therefore, be approached in an integrative manner, by simultaneously taking into consideration the impact of cognitive preferences and the valuing systems or cultural memes rooted in consciousness.


An in-depth description of the above-mentioned cognitive training processes can be found at http://integralleadershipreview.com/15984-15984/.





Beck, D.E. & Cowan, C.C. (2005). Spiral Dynamics: Mastering Values, Leadership and Change. Wiley-Blackwell.

Kepner, C.H. & Tregoe, B.B. (1997). The New Rational Manager. Princeton Research Press.

Mehrtens, S.E. (2002). Intuition overview. Unpublished article.

Mindell, A. (2010). Process Mind: A User’s Guide to Connecting with the Mind of God. Quest Books, Theosophical Publishing House, Wheaton, IL.

Prinsloo, M. & Prinsloo, R. (2018). The assessment and development of Analytical and Systems thinking skills in the work environment. Integral Leadership Review, November. http://integralleadershipreview.com/15984-15984/

Prinsloo, M. & Barrett, P. (2013). Cognition: Theory, measurement, implications. Integral Leadership Review, June. http://integralleadershipreview.com/9270-cognition-theory-measurement-implications/

Prinsloo, M. (2012). Consciousness models in action: Comparisons. Integral Leadership Review, June. http://integralleadershipreview.com/7166-consciousness-models-in-action-comparisons/

Senge, P. (1990). The Fifth Discipline (2nd Edition). Cornerstone.

Wilber, K. (2007). The Integral Vision. Shambhala Publications, Boston.


Understanding Human Consciousness: Theory and Application

Maretha Prinsloo
October 08, 2018

The study of consciousness attracts the attention of psychologists, philosophers and scientists. It is, however, mostly dealt with in a descriptive and speculative manner, without explaining the nature of the subjective experience and the dynamics involved.

Towards an Integrated Assessment of Leadership Potential

M Prinsloo
August 15, 2018

This paper focuses on the assessment of leadership potential in terms of a number of related philosophical, theoretical, and technical considerations. A critical evaluation of current assessment practice is followed by the introduction of alternative assessment methodologies and techniques aimed at measuring consciousness, cognition, and motivation. Practical guidelines for integrated and holistic leadership assessment, as well as the future of assessment, are also addressed.
Introduction

The issue of leadership is central to the practice of industrial psychology and psychometrics, the purpose of which includes realising human potential and transforming counter-productive cultural patterns in order to enhance sustainability, integration and evolution within the realm of organisational and other social systems. Leadership research includes a focus on the individual (for purposes of personal development); an organisational orientation (to enhance performance and value-add in the work environment); or an existential-philosophical perspective (focused on the evolution of consciousness). The aim of this paper is to contextualise the construct of leadership potential in terms of complexity, collective consciousness and personal traits. Factors related to cognition, levels of consciousness and motivation are integrated in terms of a Jungian perspective, based on the work of Mindell in particular. Given the shortcomings of current psychometric offerings, alternative assessment methodologies and techniques are proposed for the measurement of the following:
  • cognition by the Cognitive Process Profile (CPP);
  • levels of consciousness by the Value Orientations (VO); and
  • motivational factors by means of the Motivational Profile (MP).
This paper is the first in a series of four on leadership, including:
  • this discussion of leadership assessment solutions;
  • a second article describing a theoretical model of cognitive processing;
  • a third contribution proposing an integrative theoretical framework of levels of consciousness; and
  • a fourth paper explaining the development of consciousness and cognition within the leadership context.
These four aspects represent a holistic perspective on the assessment and development of leadership potential.
For the full article click the icon below.
Equifinal profiling

Technical Report Series

P Barrett
August 15, 2018

These articles provide detailed expositions of analyses undertaken as part of the evidence base supporting Cognadev products. Each contains an executive summary, along with the logic, analysis details, results, and critical evaluations supporting the statements made in that summary.

  • Evaluating the relationships between 4 subscales and the Full IQ scales of Sigma Assessment Systems' Multidimensional Aptitude Battery (MAB).
  • Evaluating the relationships between the Abstract Reasoning subscale of Psytech International's General Reasoning Test Battery (GRT2) and a range of CPP attributes which appear in/contribute to the CPP assessment report. The sample numbered 259 South African employment candidates assessed within a recruitment process by an organizational development consultancy. 143 cases completed the CPP, with 138 cases with complete data on both the CPP and GRT2 Abstract Reasoning scale.
  • Evaluating the relationships between the Verbal and Numerical subscales of Psytech International's Critical Reasoning Test Battery (CRTB2) and a range of CPP attributes which appear in/contribute to the CPP assessment report. The sample numbered 128 South African employment candidates assessed within a recruitment process by an HR consultancy.

For the full article in .pdf click the icon

Technical Report 1

  • The retest reliability of the VO was assessed, where the test results compared are two ordered-category sequences of selected and rejected orientations, consisting of up to three orientations per sequence (Accepted Values) and one or two orientations (Rejected Values). A new computational comparison algorithm was constructed to work with ordered category sequences, generating a percentage match index ranging from 0% (no agreement) to 100% (absolute identity).
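The report's actual algorithm is not reproduced here, but a minimal Python sketch of one way to score two ordered category sequences on a 0–100% scale (the function name and the simple positional-agreement rule are illustrative assumptions, not Cognadev's computation) might look like:

```python
def sequence_match(seq_a, seq_b):
    """Percentage match between two ordered category sequences.

    Illustrative only: scores positional agreement over the longer
    sequence, giving 0.0 for no agreement and 100.0 for identity.
    """
    if not seq_a and not seq_b:
        return 100.0
    length = max(len(seq_a), len(seq_b))
    hits = sum(1 for a, b in zip(seq_a, seq_b) if a == b)
    return 100.0 * hits / length

# Identical sequences match fully; reordered ones only partially.
print(sequence_match(["P", "R", "H"], ["P", "R", "H"]))  # 100.0
print(sequence_match(["P", "R", "H"], ["P", "H", "R"]))  # only position 1 agrees
```

A production version would likely also weight agreement by position within the sequence, since the first accepted orientation carries more interpretive weight than the third.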

For the full article in .pdf click the icon

Technical Report 2

  • Using the VO Technical Manual mixed-gender sample dataset of n=3,683 cases, four homogeneity (reliability) coefficients were computed for every values orientation "scale".

For the full article in .pdf click the icon

Technical Report 3

  • Analysing and reporting upon the relationships between VO selected and rejected values orientations, MBTI personality scores & types, Belbin ranked team types, and CPP attribute-scores, levels of work, and ranked cognitive styles.

For the full article in .pdf click the icon

Technical Report 4

    Investigating short, medium, and long-term (> 5 years) retest reliability using two samples of data:
  • 87 students undertaking an Accounting degree course at a South African University
  • 2,724 respondents, composed primarily of job applicants who had completed the CPP on two separate occasions, but also including some students and attendees at CPP training courses.

For the full article in .pdf click the icon

Technical Report 5

  • Does a small deviation r-square value have any pragmatic value at all?
  • What magnitude of deviation r-square is worth reporting, beyond "nothing to see here"?

For the full article in .pdf click the icon

Technical Report 6

  • This paper investigates how the cost of implementing the Q12 Employee Engagement assessment may be critically evaluated, by calculating the likelihood of making or losing money as a corporate-wide Q12 score is increased. That is, the question posed and answered via computational simulation is: "What are the odds of a company making or losing money as a result of an increase in score from 36 (average) to something higher?"
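The report's simulation details are not given here. Purely to illustrate the computational approach, a toy Monte Carlo under invented assumptions (a linear financial return per point of score uplift plus Gaussian noise; none of these parameter values come from the report) could look like:

```python
import random

def odds_of_gain(score_from, score_to, slope=0.004, noise_sd=0.05,
                 trials=100_000, seed=1):
    """Toy Monte Carlo: fraction of simulated outcomes that are gains.

    Hypothetical model: the financial return from an engagement-score
    uplift is slope * (score_to - score_from) plus Gaussian noise.
    All parameter values here are invented for illustration.
    """
    rng = random.Random(seed)
    uplift = slope * (score_to - score_from)
    gains = sum(1 for _ in range(trials)
                if uplift + rng.gauss(0, noise_sd) > 0)
    return gains / trials

# With no score change the odds of a gain hover around 50%;
# a genuine uplift shifts the odds in the company's favour.
print(round(odds_of_gain(36, 36), 2))
print(round(odds_of_gain(36, 50), 2))
```

The substantive work in the actual report lies in justifying the outcome distribution, which is exactly what this sketch assumes away.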

For the full article in .pdf click the icon

Technical Report 7

  • This study is aimed at answering the question of whether the CPP results of individuals tend to match the complexity requirements of their work as indicated by the Stratified Systems Theory (SST).

For the full article in .pdf click the icon

Technical Report 8

  • This study investigates the relationships between Cognadev’s Cognitive Process Profile (CPP), Cognadev’s Values Orientations (VO), and Bar-On’s Emotional Quotient Inventory (EQi).

For the full article in .pdf click the icon

Technical Report 9

  • This study investigates the nature of the proposed holonic model of information processing constructs on which the Cognitive Process Profile (CPP) is based, and what distinguishes low and high information processing competency groups in terms of their preferred cognitive styles, functional area of employment, and educational qualifications.

For the full article in .pdf click the icon

Technical Report 10

  • In this investigation, we look at how preferred cognitive styles vary over different job families and age-groups, equated on their educational level. A sample of the most recently acquired 60,572 cases of CPP data was used, subdivided into four age groups (20-29, 30-39, 40-49, 50 and above). We computed the median ranked style for each of the 14 CPP cognitive styles, within each age-group, across 10 job families.
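As a sketch of that aggregation step (the style names, ranks, and grouping keys below are invented for illustration; the real analysis uses all 14 CPP styles across 10 job families and four age groups), the median-rank computation could be done as:

```python
from collections import defaultdict
from statistics import median

# Hypothetical records: (job_family, age_group, {style: rank, ...})
records = [
    ("Finance", "20-29", {"Analytical": 1, "Holistic": 3}),
    ("Finance", "20-29", {"Analytical": 2, "Holistic": 1}),
    ("Sales",   "30-39", {"Analytical": 4, "Holistic": 2}),
]

# Collect each style's ranks within every (job_family, age_group) cell,
# then take the median rank per style.
cells = defaultdict(lambda: defaultdict(list))
for family, age, ranks in records:
    for style, rank in ranks.items():
        cells[(family, age)][style].append(rank)

medians = {cell: {style: median(r) for style, r in styles.items()}
           for cell, styles in cells.items()}
print(medians[("Finance", "20-29")]["Analytical"])  # 1.5
```

Medians are used rather than means because the style scores are ranks, where averaging ordinal positions arithmetically is harder to defend.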

For the full article in .pdf click the icon

Technical Report 11

  • This investigation examines the variation in scores on the 14 CPP Information processing competencies (IPCs) as a function of various current employment categories, age at CPP completion, and highest attained educational level, using a sample of the most recently acquired 60,572 cases of CPP data.

For the full article in .pdf click the icon

Technical Report 12

13. The Binomial Effect Size Display (BESD): is this always an accurate index of effect?
  • I provide the definition, some warnings about the conditions under which it may not produce accurate results, and some worked examples demonstrating those conditions. Overall, I think it’s a pretty good ‘quick approximation’, but it is no substitute for having all the data at hand to calculate the actual effect/accuracy implied by a correlation/validity coefficient.
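For reference, the BESD converts a correlation r into a pair of "success rates" of 0.5 + r/2 and 0.5 - r/2 (Rosenthal & Rubin, 1982). A minimal Python sketch:

```python
def besd(r: float) -> tuple[float, float]:
    """Success rates implied by a correlation r under the BESD
    (Rosenthal & Rubin, 1982): 0.5 + r/2 versus 0.5 - r/2."""
    if not -1.0 <= r <= 1.0:
        raise ValueError("r must lie in [-1, 1]")
    return 0.5 + r / 2, 0.5 - r / 2

# A validity coefficient of r = 0.32 displays as a 66% vs 34% split.
treated, control = besd(0.32)
print(round(treated, 2), round(control, 2))  # 0.66 0.34
```

The report's caveats apply here too: the display implicitly assumes a binary outcome with a 50% base rate and equal group sizes, which is where the "not always accurate" conditions arise.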

For the full article in .pdf click the icon

Technical Report 13

Two articles support the content of this technical report: 

Rosenthal, R., & Rubin, D.B. (1982) A simple, general purpose display of magnitude of experimental effect. Journal of Educational Psychology, 74, 2, 166-169.

For the full article in .pdf click the icon

A simple, general purpose display of magnitude of experimental effect.

Hsu, L.M. (2004) Biases of success rate differences shown in Binomial Effect Size Displays. Psychological Methods, 9, 2, 183-197.

For the full article in .pdf click the icon

Biases of success rate differences shown in Binomial Effect Size Displays.

14. Equifinal Profiling. I was asked this question recently by an executive responsible for hiring in a large corporate:
  • “We observe too often that people with seemingly disparate profiles can excel in the same role. What type of analyses can someone do with a big dataset of predictors and criteria to determine whether multiple "profiles" can predict success? It seems that traditional model approaches can’t do this as they just create a single ‘average’ profile or solution.”
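One exploratory way to approach that question, offered here purely as a hypothetical sketch rather than the report's actual analysis, is to cluster the predictor profiles of people who already succeeded in the role: more than one well-separated cluster suggests equifinality, i.e. several distinct routes to the same criterion.

```python
import random

def kmeans(points, k=2, iters=50, seed=0):
    """Tiny k-means, for illustration only: returns k centroid profiles."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            groups[nearest].append(p)
        centroids = [tuple(sum(axis) / len(axis) for axis in zip(*g))
                     if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids

# Hypothetical predictor profiles (two assessment scores each) of people
# who all succeeded in the same role: two distinct "success profiles".
high_performers = [(9, 2), (8, 3), (9, 3), (2, 9), (3, 8), (2, 8)]
print(sorted(kmeans(high_performers)))
```

A single averaged regression profile would land between the two centroids and resemble nobody who actually succeeded, which is precisely the executive's complaint about traditional single-profile models.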

For the full article in .pdf click the icon

Technical Report 14