
Journal of Occupational and Organizational Psychology (2002), 75, 77–86
© 2002 The British Psychological Society
www.bps.org.uk
Using a single-item approach to measure facet job satisfaction
Mark S. Nagy*
Radford University, Radford, Virginia, USA
This study builds on the work of Wanous, Reichers, and Hudy (1997) by investigating the use of a single-item approach to measuring facet satisfaction. Participants consisted of 207 employees from a variety of organizations who completed a job satisfaction survey containing the Job Descriptive Index (JDI) as well as a single item that also measured each of the five JDI facets. Results indicated that the single-item facet measure was significantly correlated with each of the JDI facets (correlations ranged from .60 to .72). Results also indicated that the single-item approach compared favourably to the JDI and in some cases accounted for incremental variance in self-reported job performance and intentions to turnover. Implications include the notions that single-item measures may be easier and take less time to complete, may be less expensive, may contain more face validity, and may be more flexible than multiple-item scales measuring facet satisfaction.
Recently, Wanous, Reichers, and Hudy (1997) published a paper demonstrating that single-item measures of overall job satisfaction correlated quite highly with multiple-item (or scale) measures of overall job satisfaction (uncorrected correlation of .63; corrected only for reliability, r=.67). Moreover, Wanous et al. concluded that single-item measures of overall satisfaction "are more robust than the scale measures of overall job satisfaction" (p. 250). Finally, Wanous et al. listed several conditions under which a single-item measure of overall satisfaction may be preferable to scale measures of overall job satisfaction, including: (1) single-item measures usually take less space than scale measures, (2) single-item measures may be more cost-effective, (3) single-item measures may contain more face validity, especially when an organization has poor employee relations (due to negative reactions to perceived repetitious questions from scale measures), and (4) single-item measures may be better for measuring changes in job satisfaction. Wanous et al. end their paper by suggesting that future research should explore the validity and reliability of other single-item measures.
The purpose of this paper is to extend Wanous et al.’s (1997) research by investigating single-item measures evaluating facet satisfaction. In essence, this paper will compare a single-item measure of facet satisfaction with a multiple-item measure of facet satisfaction. Moreover, this paper will also examine the empirical relationships produced by both the single-item and the multiple-item approaches when examining absenteeism, intentions to turnover, and self-reported job performance. It is hoped
*Requests for reprints should be addressed to Mark S. Nagy, Department of Psychology, 3800 Victory Parkway, Xavier University, Cincinnati, OH, 45207-6511, USA (e-mail: nagyms@xu.edu).
that such an investigation will provide evidence for the viability of adapting the single-item approach to the measurement of facet satisfaction.
The benefits of using a single-item facet measure
In a review of overall measures of job satisfaction, Scarpello and Campbell (1983) concluded that the best global rating of job satisfaction is a one-item, 5-point scale that simply asks, "Overall, how satisfied are you with your job?" Scarpello and Campbell and others believe that a single item measuring overall satisfaction is superior to summing up facet scales because multiple-item facet scales may neglect some components of a job that are important to an employee (Ironson, Smith, Brannick, Gibson, & Paul, 1989; Scarpello & Campbell, 1983; Wanous et al., 1997). For example, simply adding up facets to arrive at an overall index of job satisfaction may exclude important information about other areas of an employee's job that may be very instrumental in determining his/her overall job satisfaction (such as satisfaction with working conditions or satisfaction with health benefits). In much the same way, multiple-item questions in a facet scale may contain several 'sub-facets', and simply adding up these sub-facets may result in an exclusion of sub-facets that are important in determining an employee's overall level of facet satisfaction. Evidence of such sub-facets is provided by Heneman and Schwab (1985), who revealed that the facet of pay satisfaction may consist of five sub-facets, or specific areas, of pay.
In addition, Scarpello and Campbell (1983) also noted that summing up facets not important to an employee’s overall satisfaction will lead to misleading conclusions about that employee’s overall satisfaction level. For example, if satisfaction with opportunities for promotion is not important to an employee, yet is used to compute an index of the employee’s overall job satisfaction, then the obtained overall satisfaction value would be inaccurate. Likewise, facet scales may include aspects of a job that are not important to an employee. For instance, if the Job Descriptive Index (JDI) pay facet item ‘‘income provides luxuries’’ is not important to the employee, but is used to arrive at an index of the employee’s pay satisfaction, then the level of facet satisfaction obtained would also be misleading.
To be fair, it must be pointed out that the items in most multiple-item scales have undergone a tremendous amount of research that has sought to justify and validate the items contained in the scales. Moreover, many of the items on multiple-item scales probably do represent many of the aspects that individuals consider when evaluating their satisfaction with a particular facet. However, it seems highly unlikely that all of the items in any given multiple-item scale will represent all areas of a particular facet for every employee. In other words, it is extremely likely that there are individual differences among employees that help to determine their satisfaction with a particular facet. Consequently, it appears as though multiple-item measures may produce an ‘incomplete’ evaluation of an employee’s facet satisfaction.
However, this shortcoming of multiple-item scales can be easily rectified, for instance, by asking an employee to rate his/her pay using a single-item approach. An example of such a question could be, ‘‘How does your amount of pay compare to what you think it should be?’’ This approach would allow the employee to consider all relevant aspects of his/her pay regardless of whether the factor is typically included or excluded in a multiple-item scale. For example, the nine items measuring pay satisfaction on the JDI include descriptions such as: inadequate income, fairness, barely liveable, bad, provides luxuries, insecure, less than deserved, well paid, and
underpaid. However, if an employee considered ‘schedule of payment’ when evaluating his/her pay satisfaction, this factor might not be tapped by the JDI (or most other multiple-item measures of pay satisfaction). Yet, using a single-item measure allows an employee to consider any and all aspects of their satisfaction when evaluating a particular facet (such as pay), including any unique aspects such as ‘schedule of payment’. Consequently, a single-item measure allows for individual preferences in the facet being measured and, as a result, provides a more complete picture of that particular employee’s facet satisfaction.
In addition, using a single-item approach may yield a number of ‘non-psychometric’ advantages. One of these advantages is that a single-item measure is much shorter than a multiple-item measure. Multiple-item measures typically use a large number of items to measure a small number of facets. Yet, single-item measures can evaluate facets with just one question per facet. This advantage is important because a shorter survey is more likely to receive approval by an administrator and is more likely to be completed by an employee (Wanous et al., 1997).
Yet another advantage of single-item measures is that the format is much more efficient than that of multiple-item measures. According to Wanous et al. (1997), single-item measures take up less space, are more cost-effective, and may contain more face validity than multiple-item measures. There is no reason to believe that these advantages would not pertain to single-item measures of facet satisfaction. A single-item measure of facet satisfaction would certainly be shorter than a multiple-item measure, would require less time to complete, and would consequently be more cost-effective when employees must take time out of their workday to complete a survey.
A final non-psychometric advantage of single-item measures is that they can be easily altered to measure just about any facet, particularly facets that are not covered by existing multiple-item measures. For instance, one facet that may be of concern in a union environment is job security. Unfortunately, there are no multiple-item measures that evaluate job security satisfaction. However, single-item measures can be adapted to measure this relatively unexplored facet of job satisfaction. Using the single-item format presented in this paper, one can easily create a new measure for job security (e.g. "How does the amount of job security that you have compare to what it should be?"). Unfortunately, multiple-item facet scales do not have this capability; if one is interested in job security satisfaction, one cannot easily alter a multiple-item measure. Instead, one would have to conduct a validation study testing the validity of a multitude of items in order to eventually arrive at a number of items that would be logically related to job security satisfaction. As a result, single-item measures have the added benefit of allowing practitioners to explore non-traditional facets that are typically not investigated in traditional multiple-item measures.
Measurement issues of single-item measures
Of course, as pointed out by Wanous et al. (1997), single-item measures have also received their share of criticism. Specifically, single-item measures have been criticized because such measures cannot yield estimates of internal consistency reliability, nor can single-item measures be used in structural equation models (Wanous et al., 1997). Yet, most researchers would agree that the primary concern when evaluating a measure of any construct should be, first and foremost, the extent to which the
instrument represents and measures the construct. Although obtaining an index of reliability can be important in helping one to evaluate the validity of an instrument, it seems that having an instrument that is more inclusive of the construct is even more important.
Another criticism of using a single-item approach is that single-item estimates tend to have moderate correlations with scale measures (Wanous et al., 1997). However, Wanous et al. demonstrated that single-item measures of overall satisfaction appear to be correlated fairly well with multiple-item measures of overall satisfaction (minimum estimated correlation was .63). Based on this research, there is no reason to believe that single-item measures of facet satisfaction will yield much lower correlations with multiple-item measures of facet satisfaction. In short, it appears that previous criticisms of single-item measures may not be warranted.
The present study
Based on the notion that facet satisfaction can be measured using a single-item approach, the present study sought to essentially extend Wanous et al.’s (1997) findings regarding global satisfaction by comparing single-item measures of facet satisfaction to multiple-item measures of facet satisfaction. Hence, the following hypothesis is proposed:
Hypothesis 1: Each single-item measure of facet satisfaction will be significantly correlated with the appropriate multiple-item measure of facet job satisfaction.
Furthermore, in order to extend Wanous et al.’s (1997) research, this study sought to demonstrate that a single-item approach to measuring facet satisfaction would account for incremental variance in job-related behaviours above and beyond that of a multiple-item facet measure. Consequently, the following hypotheses are proposed:
Hypothesis 2a: Each single-item measure of facet satisfaction will account for significant incremental variance in absenteeism beyond that explained by a multiple-item measure.
Hypothesis 2b: Each single-item measure of facet satisfaction will account for significant incremental variance in intentions to turnover beyond that explained by a multiple-item measure.
Hypothesis 2c: Each single-item measure of facet satisfaction will account for significant incremental variance in self-reported job performance beyond that explained by a multiple-item measure.
Method
Sample
The sample consisted of 207 full-time employees from a variety of organizations. The sample was obtained primarily from undergraduate students who received credit for recruiting one full-time employee to complete the survey. This approach is similar to the method used by Hazer and Highhouse (1997) to recruit managers, and seems to have yielded data consistent with previous research. For example, the obtained internal consistency indices for the JDI (ranging from .83 to .90) compare quite favourably with previous research (Johnson, Smith, & Tucker, 1982; Smith et al., 1989). Descriptive statistics for various demographic and job-related variables are presented in Table 1.

Table 1. Descriptive information for various demographic and job-related behaviours and intentions variables
                           Mean     SD     Minimum   Maximum
Age                       34.07   12.16       19        60
Education                 14.75    2.19        7        20
Working full-time         11.80   10.60        0        40
Years in organization      6.62    8.12        0        37
Years in position          3.98    5.43        0        37
Absenteeism                3.88    6.42        0        70
Intentions to turnover    44.41   36.75        0       100
Performance               90.25    7.56       62       100
Note: Demographic variables in years. Absenteeism is number of days missed. Intentions to turnover and job performance were measured using a 100-point scale, with 100 being ‘very, very much’ for intentions to turnover, and 100 being ‘extremely well’ for performance.
Measures
Facet satisfaction
Facet satisfaction was measured using the JDI as well as a single-item, discrepancy-based question. The JDI consists of 72 items that comprise five sub-scales or facets: work itself, pay, opportunities for promotion, supervision, and co-workers. The JDI has been used to measure job satisfaction in over 400 studies (Smith et al., 1989) and has documented evidence of convergent and discriminant validity (e.g. Gillet & Schwab, 1975; Johnson et al., 1982; Jung, Dalessio, & Johnson, 1986). In the present study, Cronbach's α estimates of internal consistency for the five JDI facets ranged from .83 to .90 (see Table 2).
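For readers wishing to reproduce such internal consistency estimates from raw item responses, the following is a minimal sketch of the standard Cronbach's α calculation in Python; the array and function names are illustrative and not taken from the study.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                         # number of items in the facet scale
    item_var = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1.0 - item_var.sum() / total_var)

# Usage (hypothetical data): alpha for one JDI facet
# jdi_pay_items = np.loadtxt("pay_items.csv", delimiter=",")
# print(cronbach_alpha(jdi_pay_items))
```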
In addition, five single-item questions measuring the same facets as the JDI were developed using a discrepancy-based approach. Because previous research has indicated that discrepancy measures may suffer from a host of psychometric problems (e.g. Gati, 1989; Johns, 1981; Wall & Payne, 1973), each question combined the typical two discrepancy questions (e.g. ‘‘How much do you have?’’ and, ‘‘How much do you wish you had?’’) into one question. Thus, a typical discrepancy question was, ‘‘How does the amount of pay that you currently receive compare to what you think it should be?’’ (rated from not at all satisfying to very satisfying). Estimates of the minimum reliability for the single-item measures were computed using the same correction for attenuation procedure used by Wanous et al. (1997). Because it was argued above (and by Scarpello & Campbell, 1983) that single-item scales may encompass more of a construct than multiple-item scales, a correlation of .90 between single-item and multiple-item scales was used as the true correlation between these scales. As can be seen in Table 2, the minimum reliability estimates for the single-item measures ranged from .52 to .76, with a mean minimum reliability estimate of .63. Ironically, this is exactly the same value that Wanous et al. estimated for single-item global measures of satisfaction.
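The minimum-reliability estimates just described follow from solving the correction-for-attenuation formula, r_observed = r_true × sqrt(r_xx × r_yy), for the unknown single-item reliability r_yy, with the JDI facet's α serving as r_xx and the assumed true correlation fixed at .90. A minimal sketch of that arithmetic in Python, using illustrative values from Table 2:

```python
def min_single_item_reliability(r_observed, jdi_alpha, r_true=0.90):
    """
    Solve r_observed = r_true * sqrt(r_xx * r_yy) for r_yy, the lowest
    reliability the single-item measure could have, treating the JDI
    facet's Cronbach's alpha as r_xx and assuming a true correlation of .90.
    """
    return (r_observed / r_true) ** 2 / jdi_alpha

# Example: the pay facet (observed r = .72, JDI alpha = .84) yields about .76,
# matching the parenthesized value for the single-item pay measure in Table 2.
print(round(min_single_item_reliability(0.72, 0.84), 2))  # 0.76
```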
Job-related intentions/behaviours
Absenteeism, intentions to turnover, and performance were each measured using a one-item, self-report measure. Employees were asked to report their absenteeism by indicating the number of times they were absent during the past year. In a similar fashion, employees were asked to report their intentions to leave the organization (i.e. turnover) by indicating how much they would like to leave the organization within the next 12 months. The scale endpoints for the intentions to turnover measure ranged from 1 (not much at all) to 100 (very, very much). Finally, employees were asked to rate how well they had performed their job duties over the previous year, with the scale points ranging from 1 (not at all well) to 100 (extremely well).

Table 2. Intercorrelations between the JDI and the single-item approach measures of facet job satisfaction
                          1      2      3      4      5      6      7      8      9      10
JDI
 1. Work itself         (.83)
 2. Pay                  .20   (.84)
 3. Promotions           .37    .32   (.86)
 4. Supervision          .35    .21    .33   (.89)
 5. Coworkers            .45    .15    .31    .52   (.90)
Single-item approach
 6. Work itself          .65    .30    .36    .39    .37   (.63)
 7. Pay                  .27    .72    .22    .15    .15    .48   (.76)
 8. Promotions           .41    .28    .60    .29    .28    .47    .34   (.52)
 9. Supervision          .36    .30    .35    .70    .40    .47    .30    .41   (.68)
10. Coworkers            .39    .12    .21    .33    .64    .46    .21    .38    .36   (.56)
Note: All correlations significant at p<.01. Cronbach's α in parentheses for the JDI facets; for the single-item measures, the parenthesized values are minimum estimates of reliability obtained using the correction-for-attenuation formula, assuming a correlation of .90 between single-item and multiple-item scales. Correlations between the same facet across the two measures range from .60 to .72.
Results
Empirical support was found for the first hypothesis, as each of the single-item measures of facet satisfaction significantly correlated with the appropriate multiple-item measure of facet satisfaction. For example, the single-item scale measuring the work itself facet was significantly correlated with the work itself facet of the JDI, r=.65, p<.01. The obtained correlations between the single-item scales and the multiple-item measures ranged from .60 (for opportunities for promotion) to .72 (for pay), with a mean correlation of .66 across all facets (see the same-facet correlations in Table 2). These indices compare quite favourably with Wanous et al.'s (1997) findings regarding single-item overall satisfaction correlations with overall scale measures, which they estimated to be at least .63.
In regard to the second set of hypotheses, there was no support for Hypothesis 2a, which proposed that the single-item approach would account for significant variance in absenteeism beyond that of the JDI for all of the facets. This lack of support might be partially explained by the low amount of overall variance in absenteeism explained by both facet measures. For Hypothesis 2b, the single-item measure accounted for significant incremental variance in intentions to turnover beyond that explained by the JDI for each of the satisfaction facets: work itself, ΔF(2,204)=25.47, p<.001; pay, ΔF(2,203)=4.55, p<.05; opportunities for promotion, ΔF(2,201)=15.43, p<.001; supervision, ΔF(2,201)=4.48, p<.05; and co-workers, ΔF(2,204)=6.52, p<.05. Hence, Hypothesis 2b was fully supported. Finally, the results were mixed for Hypothesis 2c, as the single-item approach predicted significant incremental variance in job performance for two of the satisfaction facets: opportunities for promotion, ΔF(2,201)=7.27, p<.01, and supervision, ΔF(2,201)=4.07, p<.05. In addition, the single-item measure approached significance in predicting incremental variance for the work itself facet, ΔF(2,204)=3.19, p<.07, and the β weight for the work itself facet was significant at the .05 level.
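The incremental-variance tests reported above are hierarchical-regression F-change (ΔF) tests, with the JDI facet entered at the first step and the corresponding single-item measure added at the second. The sketch below illustrates only the general computation under that assumption; the variable names are hypothetical, and it is not intended to reproduce the exact degrees of freedom or F values reported here.

```python
import numpy as np

def r_squared(y, X):
    """R^2 from an OLS regression of y on X (an intercept column is added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

def delta_f(y, step1_predictors, step2_predictors):
    """
    F-change for the predictors added at step 2 of a hierarchical regression:
        dF = ((R2_full - R2_reduced) / df_added) / ((1 - R2_full) / (n - k_full - 1))
    """
    X1 = np.column_stack([step1_predictors])
    X2 = np.column_stack([step2_predictors])
    X_full = np.column_stack([X1, X2])
    n, k_full = X_full.shape
    df_added = X2.shape[1]
    r2_reduced = r_squared(y, X1)
    r2_full = r_squared(y, X_full)
    return ((r2_full - r2_reduced) / df_added) / ((1 - r2_full) / (n - k_full - 1))

# Hypothetical usage: does the single-item pay measure add to the JDI pay facet
# in predicting intentions to turnover?
# f_change = delta_f(turnover_intent, jdi_pay, single_item_pay)
```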
As an additional comparison between the single-item measure and the three job-related behaviours, it may be fruitful to examine the bivariate correlations obtained using the single-item measure and compare them with the correlations obtained in the meta-analytic research literature. With the exception of absenteeism, the bivariate correlations involving the single-item measure are quite similar to those reported in previous research. For instance, the single-item work itself correlation with intentions to turnover (-.55) is nearly twice as large as the -.28 reported in previous research (Steel & Ovalle, 1984). And, although no meta-analyses have reported correlations for the other four job facets, the average correlation between the five single-item facet measures and intentions to turnover (-.39) is much higher than the -.25 correlation between overall job satisfaction and intentions to turnover (Steel & Ovalle, 1984). Finally, the obtained correlations between the single-item measures and previous meta-analytic findings are also quite comparable for job performance. For instance, the obtained single-item correlations for the work itself (.16) and promotions (.11) facets are essentially the same as previously reported multiple-item correlations (.175 for work itself and .12 for promotions), whereas the obtained single-item correlations for pay (.11), supervisor (.26), and co-worker (.15) satisfaction are higher than previous findings for multiple-item measures (.05 for pay, .16 for supervisor, and .10 for co-worker satisfaction; Iaffaldano & Muchinsky, 1985).
Discussion
Overall, the empirical results of this study demonstrate that single-item measures of facet satisfaction compare quite favourably with multiple-item measures of facet satisfaction. The results from the first hypothesis confirmed that the single-item measures of facet satisfaction were significantly correlated with multiple-item measures of the same job facet. Moreover, the obtained correlations were quite similar in magnitude to the correlations Wanous et al. (1997) obtained for single-item measures of global satisfaction. And although not directly part of Hypothesis 1, the mean estimate of the minimum reliability of the single-item facet measures (.63) was the same level that Wanous et al. reported for single-item global scales. Consequently, these findings corroborate Wanous et al.’s findings that single-item measures may be acceptable for measurement purposes.
The results from the second set of hypotheses indicated mixed support. In regard to absenteeism (Hypothesis 2a), the single-item measure approach did not account for significant variance above and beyond that explained by the JDI in any of the facets. This lack of support may have been due to the generally poor relationships between absenteeism and facet satisfaction found in both the JDI and the single-item measures. However, the single-item measures did explain significant incremental variance in intentions to turnover beyond that accounted for by the JDI for all five facets of satisfaction. Thus, Hypothesis 2b was fully supported. Finally, in regard to job performance (Hypothesis 2c), the single-item measure accounted for significant incremental variance for job performance beyond that accounted for by the JDI in two of the five satisfaction facets, with a third facet approaching significance in the expected direction. Taken together, the results from the second set of hypotheses suggest that a single-item measure of facet satisfaction may indeed account for more variance than the JDI for all facets associated with intentions to turnover, for some facets associated with performance, but for no facets associated with absenteeism.
Thus, this investigation has demonstrated that a single-item measure of facet satisfaction compares favourably with multiple-item scales in terms of producing similar
correlations between facet satisfaction and outcome variables such as intentions to turnover and job performance. Moreover, given that single-item measures are shorter than multiple-item scales and that they may encompass more of the facet under scrutiny than multiple-item measures, it stands to reason that the single-item approach should receive strong consideration when choosing a measure of facet satisfaction.
Limitations
Perhaps the most serious limitation of this study is the reliance on self-report data for the satisfaction measures as well as the job-related behaviours and intentions. However, this limitation may be the lesser of two evils. For instance, this study investigated job satisfaction using a heterogeneous sample, which increased the likelihood of generalizing the results. Although the use of a homogeneous sample (such as a group from one organization) may have allowed the collection of job-related behaviour and intention data from other sources besides the self, such a procedure would have required employees to indicate their names on their satisfaction ratings. Such a requirement would have compromised the employee’s confidentiality and, in all likelihood, would have resulted in employee responses being less than authentic. However, a fruitful extension of this research would be to investigate the viability of the hypotheses tested in this study in a setting in which actual behavioural measures of performance, absenteeism and turnover could be collected.
The use of self-report data may also have resulted in common method variance which would have served to inflate the obtained correlations between the measures. This concern is greatest for the first hypothesis, which found that the JDI and the single-item facet scales were significantly correlated. However, correlating scales that are assumed to measure the same construct is essential in establishing construct validity. In addition, the approaches taken in this study mirrored those taken by Wanous et al. (1997). Although there is little precedent in the job satisfaction literature, future research could use different sources (such as peers) to provide ratings of job satisfaction using both the JDI and the single-item measure used in this study.
On the other hand, the presence of common method variance would have not posed a threat for the second set of hypotheses, due to the incremental predictions of the single-item measures. If anything, common method variance would have served to attenuate these incremental predictions. Moreover, a restricted range of responses was observed in all of the job-related behaviour and intention measures (see Table 1), an outcome which would have attenuated any correlations with satisfaction. Under these adverse conditions, it is not surprising that the results involving the job-related hypotheses were mixed.
Finally, as with all single-item measures, no calculations of internal consistency could be computed. The only alternative methods for obtaining reliability data of single-item measures would be through the use of test–retest or equivalent-forms approaches. However, both of these approaches would have required employees to provide their names on the surveys and, as discussed earlier, would have resulted in violated confidentiality and consequently may have damaged the credibility of satisfaction responses. On the other hand, using techniques similar to Wanous et al. (1997), the evidence collected from the first hypothesis suggests that the single-item measure does correlate significantly with the corresponding facets of the JDI. In order to examine the generalizability of the single-item correlations, future research may need to investigate correlations with other job facet measures.
Conclusions
This study demonstrated that a single-item, discrepancy-based measure of facet satisfaction compared favourably, from a psychometric standpoint, with a popular multiple-item measure of satisfaction (the JDI) in several areas. First, the single-item facet measure was significantly correlated with a much longer multiple-item measure of facet satisfaction. Second, minimum estimates of reliability for the single-item facet scales compared quite favourably with the minimum estimates that Wanous et al. (1997) obtained for single-item global scales. Third, the single-item measure of facet satisfaction accounted for incremental variance above and beyond that accounted for by the JDI for all facets associated with intentions to turnover and for some facets associated with performance. Moreover, based on several non-psychometric properties, the single-item measure appears to be preferable to multiple-item measures of facet satisfaction in that it is more efficient, is more cost-effective, contains more face validity, and is better able to measure changes in job satisfaction.
Acknowledgements
I would like to thank Stephanie Lovejoy, John Fuller and two anonymous reviewers for their insightful suggestions and critiques of this manuscript.
References
Gati, I. (1989). Person–environment fit research: Problems and prospects. Journal of Vocational Behavior, 35, 181–193.
Gillet, B., & Schwab, D. P. (1975). Convergent and discriminant validities of corresponding Job Descriptive Index and Minnesota Satisfaction Questionnaire scales. Journal of Applied Psychology, 60, 313–317.
Hazer, J. T., & Highhouse, S. (1997). Factors influencing managers’ reactions to utility analysis: Effects of SDy method, information frame, and focal intervention. Journal of Applied Psychology, 82, 104–112.
Heneman, H. G., & Schwab, D. P. (1985). Pay satisfaction: Its multidimensional nature and measurement. International Journal of Psychology, 20, 129–141.
Iaffaldano, M. T., & Muchinsky, P. M. (1985). Job satisfaction and job performance: A metaanalysis. Psychological Bulletin, 97, 251–273.
Ironson, G. H., Smith, P. C., Brannick, M. T., Gibson, W. M., & Paul, K. B. (1989). Construction of the job in general scale: A comparison of global, composite, and specific measures. Journal of Applied Psychology, 74, 193–200.
Johns, G. (1981). Difference score measures of organizational behavior variables: A critique. Organizational Behavior and Human Performance, 27, 443–463.
Johnson, S. M., Smith, P. C., & Tucker, S. M. (1982). Response format of the Job Descriptive Index: Assessment of reliability and validity by the multi-trait, multi-method matrix. Journal of Applied Psychology, 67, 500–505.
Jung, K. G., Dalessio, A., & Johnson, S. M. (1986). Stability of the factor structure of the Job Descriptive Index. Academy of Management Journal, 29, 609–616.
Scarpello, V., & Campbell, J. P. (1983). Job satisfaction: Are all the parts there? Personnel Psychology, 36, 577–600.
Smith, P. C., Balzer, W., Josephson, H. I., Lovell, S. E., Paul, K. B., Reilly, B. A., Reilly, C. E., & Whalen, M. A. (1989). Users’ manual for the Job Descriptive Index (JDI) and the Job in General (JIG) scales. Bowling Green, Ohio: Bowling Green State University.
Steel, R. P., & Ovalle, N. K. (1984). A review and meta-analysis of research on the relationship between behavioral intentions and employee turnover. Journal of Applied Psychology, 69, 673–686.
Wall, T. D., & Payne, R. (1973). Are deficiency scores deficient? Journal of Applied Psychology, 58, 322–326.
Wanous, J. P., Reichers, A. E., & Hudy, M. J. (1997). Overall job satisfaction: How good are single-item measures? Journal of Applied Psychology, 82, 247–252.
Received 24 May 1999; revised version received 15 May 2001