Managing response bias in personality assessment: introducing an alternative to flawed ‘social desirability’ scales

Image by Sanna Jågas from Pixabay 


Most of us, at one point or another, have taken a personality test. It could have been a trait-based assessment using the big-5 personality traits, a type-based assessment like the popular Myers-Briggs Type Indicator, or just a pointless 'which superhero are you?' quiz that a friend shared with you on Facebook back in the day. It could have been completed as part of a job application, for team development, for academic purposes, or out of sheer curiosity. Thinking back, do you recall whether you tried to answer the test as honestly as you could, or whether you worked out exactly why a particular question was being asked and altered your answer accordingly? If you can relate to the latter, as I know I have been guilty of on many occasions, then you have experienced something that falls under the general term of response bias.

What is social desirability?

Response bias is an umbrella term comprising various types of bias, one of which is social desirability, a major concern for self-report questionnaires. Social desirability has been defined as "the tendency to endorse items in response to social or normative pressures instead of providing veridical self-reports" (Ellingson et al., 2001, p. 122). While it was initially believed to be a unidimensional construct, more recent research by Paulhus (2002) suggests that social desirability has two dimensions: impression management and self-deception. Impression management is an individual's intentional adaptation of their image so that others view them in a favourable light, while self-deception is an unintentional favourable misrepresentation of self: people genuinely believe their self-descriptions, but those descriptions are not accurate. The key effect of social desirability is that it tends to inflate a candidate's scores in areas that are (or that candidates believe to be) positively related to job performance, while deflating scores in areas that are (or are believed to be) negatively related to job performance.

The effects of social desirability are a genuine concern for personnel management, and rightly so: research conducted in multiple countries indicates that job applicants do intentionally distort their responses on personality tests compared with non-applicants (Birkeland et al., 2006).

Detecting response bias

Numerous social desirability measures/scales have been developed to detect such possible distortions in order to assess personality more accurately. In a review of personality inventories used in candidate selection, Goffin and Christiansen (2003) found that 85% of such inventories included a measure of social desirability, and while two of the more commonly used inventories apply a mechanical "correction" to trait scores based on an elevated social desirability score, the vast majority did not. These measures test for distortion by using items where, in a normative sample, the desirable response is relatively infrequent. Multiple infrequent-but-desirable responses produce a higher score, which is taken as an indication of distortion. The individual's scores on the personality inventory are then adjusted either subjectively or mechanically (using a mathematical formula).
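The inventories in question do not publish their correction formulas, but a mechanical correction of the kind described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function name, the linear penalty, the norm mean of 50 and the weight of 0.3 are all hypothetical, not any real inventory's method.

```python
def corrected_trait_score(raw_trait: float, sd_score: float,
                          sd_mean: float = 50.0, weight: float = 0.3) -> float:
    """Adjust a raw trait score downward in proportion to how far the
    respondent's social desirability (SD) score sits above the norm mean.

    Only an elevated SD score triggers a correction; scores at or below
    the norm leave the trait score untouched. All constants are
    illustrative, not taken from any published inventory.
    """
    elevation = max(0.0, sd_score - sd_mean)
    return raw_trait - weight * elevation

# A respondent 20 points above the SD norm loses 6 points per trait:
print(corrected_trait_score(raw_trait=65.0, sd_score=70.0))  # 59.0
```

The subjective alternative mentioned above has no formula at all: an assessor inspects each elevated profile by hand, which is exactly why it stops scaling as the candidate pool grows.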

Several concerns have been raised over the years about the use of social desirability scales. One, noted by Ones and colleagues (1998), is that 'adjusting' the scores of a personality inventory modifies the inventory's construct validity. In other words, the adjusted scores may fail to correspond to the respondent's actual personality characteristics. Subjective adjustment is also nonviable beyond a certain candidate pool size, as it requires individually examining each profile.

The problem with social desirability scales correlating with traits

Another major concern with social desirability scales is their correlation with the very traits being measured. Ideally, a social desirability scale should be entirely unrelated to the traits measured, so that a high score can only indicate distortion. If such a correlation exists, it becomes unclear what a high social desirability score means. Has there been an intentional attempt to distort responses, or does the respondent genuinely score more strongly on the correlated trait? A meta-analysis of the correlation between the five-factor model and social desirability scores (Ones et al., 1996) found positive correlations with agreeableness, conscientiousness and emotional stability. This is in line with Mosaic Assessments' own recent research, in which social desirability items correlated with the big-5 personality traits. In other words, the questions did not isolate social desirability.

In the context of personnel management specifically, this becomes even more problematic when there is a positive correlation between the social desirability score and traits that tend to be positively related to the job. In that case, respondents who actually have the right personality traits for a given profile could end up being flagged for distortion because of a high social desirability score. At the same time, distortion of responses due to social desirability remains a genuine headache for any recruiting team, and coaches and learning professionals trying to help leaders and employees develop their personal skills may be faced with an 'inaccurate' picture of the individual. In other words, it seems you can't live without social desirability scales, but you can't live with them either!

Is the answer ipsative items?

A different approach is to use scales composed of forced-choice items (ipsative measures), where a participant is presented with groups of four personality statements and must select the one that describes them best and the one that describes them least. The basic assumption of forced-choice measures is that if items of similar social desirability are grouped together, an individual's final response will not be affected by social desirability and will be a better indicator of their personality characteristics. However, findings from more recent research (Christiansen et al., 2005) suggest that forced-choice scales are just as susceptible to distortion as normative scales. Additionally, concerns have been raised over the years about the psychometric properties and reliability of this approach, and about whether it can be used to compare individuals at all (Meade, 2004; Johnson et al., 1988).

The Alternative – Self-Perception!

Because Mosaic Personality Tasks collects both self-report data (through a short questionnaire) and objective personality data (through online tasks), it can offer a unique solution to this problem. Instead of relying on a social desirability scale in the questionnaire, Mosaic derives a "self-perception" score by comparing how the individual scored on Mosaic's objective behavioural tasks with their personality questionnaire answers. As well as an overall self-perception score, individuals are also alerted to particular facets where there is a discrepancy between the two scores.
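Mosaic's actual scoring is not described in detail here, so the following is only a plausible sketch of the comparison logic: questionnaire and task scores are assumed to be on the same standardised scale, the facet names are invented, and the flagging threshold is arbitrary.

```python
def self_perception(self_report: dict[str, float],
                    task_scores: dict[str, float],
                    threshold: float = 1.0) -> tuple[float, list[str]]:
    """Compare questionnaire facet scores against task-based facet scores
    (both assumed standardised, e.g. z-scores).

    Returns an overall self-perception score (mean questionnaire-minus-task
    difference: positive suggests overselling, negative underselling) and
    the facets whose gap exceeds the flagging threshold. The threshold and
    scoring rule are illustrative assumptions, not Mosaic's actual method.
    """
    diffs = {facet: self_report[facet] - task_scores[facet]
             for facet in self_report}
    overall = sum(diffs.values()) / len(diffs)
    flagged = [facet for facet, d in diffs.items() if abs(d) > threshold]
    return overall, flagged

overall, flagged = self_perception(
    self_report={"diligence": 1.5, "sociability": 0.2, "calmness": 1.8},
    task_scores={"diligence": 0.1, "sociability": 0.3, "calmness": 0.4},
)
# A consistently positive overall score suggests the respondent rated
# themselves higher than the tasks indicate, and the individual facets
# with a large gap are surfaced for follow-up.
```

The two-part output mirrors the description above: one overall score, plus per-facet alerts where self-report and task evidence disagree.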

Mixing tasks and self-report
(Image: how self-report scores are combined with task scores)

This "self-perception" score can be "low", i.e. the person has consistently rated their personality attributes lower on the self-report questionnaire than the behavioural tasks suggest is really the case. In other words, they have been too self-critical when answering the questionnaire. In a recruitment setting, they may tend to undersell themselves and potentially miss out on opportunities. In a development setting, perhaps the person has more to offer than they realise. We all know people who underplay themselves, and a 'low' self-perception score is potentially critical information for challenging this tendency and identifying competency strengths.

Conversely, the score can be "high", i.e. the person has consistently rated their personality attributes higher on the questionnaire than the behavioural tasks suggest is really the case. In other words, they have oversold themselves, either deliberately or because they genuinely see themselves that way. In a recruitment setting in particular this is key, and it offers a stronger, more robust alternative to the usual social desirability scale. With this score identified, interviewers have a much clearer understanding of the questions they need to ask and the specific personality facets to probe more thoroughly. In extreme cases, interviewers may want to triangulate with other sources of evidence such as references. In a development setting, the self-perception score opens up a whole realm of opportunity for discussion and understanding.

In common with social desirability scales, this self-perception measure only flags those scoring at the extremes. Over 90% of people score somewhere in the middle, rating themselves neither too highly nor too self-critically; those scoring right in the middle perhaps know themselves very well indeed. Of course, on any particular personality scale there can still be a discrepancy, a blind spot relating to just that one attribute, e.g. seeing oneself as more selflessly helpful than is really the case.

————–

This article was written by Prithvi Godi, who was in his final year of Psychology at Stirling University. Prithvi has a strong interest in Occupational Psychology and has been on a student placement at Mosaic Assessments Ltd. Please connect with him on LinkedIn.

You can find out more about Mosaic Personality Tasks at www.mosaictasks.com. If you are interested in reading more articles on personality, as well as receiving updates on Mosaic Personality Tasks, please sign up to our mailing list.


References

Birkeland, S. A., Manson, T. M., Kisamore, J. L., Brannick, M. T., & Smith, M. A. (2006). A meta-analytic investigation of job applicant faking on personality measures. International Journal of Selection and Assessment, 14(4), 317-335.

Christiansen, N. D., Burns, G. N., & Montgomery, G. E. (2005). Reconsidering forced-choice item formats for applicant personality assessment. Human Performance, 18(3), 267-307.

Ellingson, J. E., Smith, D. B., & Sackett, P. R. (2001). Investigating the influence of social desirability on personality factor structure. Journal of Applied Psychology, 86(1), 122.

Goffin, R. D., & Christiansen, N. D. (2003). Correcting personality tests for faking: A review of popular personality tests and an initial survey of researchers. International Journal of Selection and Assessment, 11(4), 340-344.

Johnson, C. E., Wood, R., & Blinkhorn, S. F. (1988). Spuriouser and spuriouser: The use of ipsative personality tests. Journal of Occupational Psychology, 61(2), 153-162.

Meade, A. W. (2004). Psychometric problems and issues involved with creating and using ipsative measures for selection. Journal of Occupational and Organizational Psychology, 77(4), 531-551.

Ones, D. S., Viswesvaran, C., & Reiss, A. D. (1996). Role of social desirability in personality testing for personnel selection: The red herring. Journal of Applied Psychology, 81(6), 660.

Paulhus, D. L. (2002). Socially desirable responding: The evolution of a construct. In The role of constructs in psychological and educational measurement (pp. 49-69).
