
Exhibit 14-1. A sample participant evaluation.

Directions: Complete the following evaluation at the end of the training session. Circle the number at the right that most closely approximates your feelings about the statement in the left column. Use the following scale:

5 = Strongly Agree
4 = Agree
3 = Neither Agree nor Disagree
2 = Disagree
1 = Strongly Disagree

There are no right or wrong answers in any absolute sense. Mark your response quickly, since your first reaction is most likely to reflect your genuine feelings.

1. This training course had a clearly defined purpose. 5 4 3 2 1
2. This training course had clearly defined objectives. 5 4 3 2 1
3. The structure of this training course was clear from the outset. 5 4 3 2 1
4. This training course was clearly related to my job. 5 4 3 2 1
5. I feel that I learned much in this training course. 5 4 3 2 1
6. I will apply what I learned back on my job. 5 4 3 2 1
7. I am confident that my coworkers will support the on-the-job application of what I learned in this training course. 5 4 3 2 1
8. I am confident that my supervisor will support the on-the-job application of what I learned in this training course. 5 4 3 2 1

9. What were the chief benefits of this training course?
10. What areas need improvement in this training course?
11. If I were asked to prove how this training would improve my job performance in measurable ways, I would suggest:
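Tabulating completed forms like this one is simple arithmetic: average the circled ratings for each item across respondents. The following is a minimal sketch in Python, with abbreviated item labels and made-up sample responses; it is an illustration, not part of the original exhibit.

```python
# A minimal sketch of tallying Exhibit 14-1 responses, assuming each completed
# form is recorded as a list of ratings for items 1-8 (5 = Strongly Agree ...
# 1 = Strongly Disagree). The sample data are hypothetical.

from statistics import mean

ITEMS = [
    "Clearly defined purpose",
    "Clearly defined objectives",
    "Structure clear from the outset",
    "Clearly related to my job",
    "Learned much",
    "Will apply on the job",
    "Coworker support expected",
    "Supervisor support expected",
]

# Each inner list is one participant's ratings for items 1-8.
responses = [
    [5, 4, 4, 5, 4, 4, 3, 4],
    [4, 4, 3, 5, 5, 4, 2, 3],
    [5, 5, 4, 4, 4, 5, 3, 4],
]

for i, label in enumerate(ITEMS):
    avg = mean(r[i] for r in responses)
    print(f"Item {i + 1} ({label}): mean rating {avg:.2f}")
```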

customers, suppliers, and distributors. Training and development professionals can also measure on-the-job behavioral change through unobtrusive measures such as examinations of participants’ performance appraisals or of work results.
Evaluating training by measuring participant on-the-job behavior change has the advantage of establishing a basis of accountability for the participants. It may show that changes begun in training (because of new learning) have carried over to the job. However, establishing a definitive correlation between training and job behavior change has long been problematic, since many variables besides training influence how individuals behave on their jobs. The amount of support they receive from coworkers and immediate organizational superiors may significantly influence changes begun in training. Unfortunately, well-designed training may end up yielding no change on the job because conditions in any of the four performance areas do not support that change. On the other hand, ill-designed training may yield significant change if working conditions support it.
The fourth and highest level of Kirkpatrick’s hierarchy is organizational results. It addresses this key question: How much did training affect the organization? Measuring the organizational results of training means determining the financial returns on training investments. How well did the training translate into a favorable ratio of outputs to inputs? What measurable gains in the bottom line were realized by the organization as a result of the training?
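The text does not prescribe a formula, but a widely used convention for expressing this ratio of outputs to inputs is the return-on-investment percentage:

\[
\text{ROI (\%)} = \frac{\text{program benefits} - \text{program costs}}{\text{program costs}} \times 100
\]

On hypothetical figures, a program returning $120,000 in measurable benefits against $35,000 in total costs would show an ROI of roughly 243 percent.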
Assessing the organizational results of training has traditionally been the least commonly used method of training evaluation. One reason is that it is usually time-consuming and expensive to do. A second reason is that there are no foolproof approaches—although many training and development professionals continue to seek a quick-and-dirty (and bulletproof) approach to it. A third reason is that, even when evidence of organizational results from training can be offered, it may not be convincing to skeptical decision makers and stakeholders; there is an important difference between accumulating evidence and rendering unquestionable proof. Decision makers may disbelieve the evidence submitted, particularly when it comes from self-interested training and development professionals who conceived and evaluated the training effort.
Approaches to measuring organizational results vary. One approach is to specify, before training is conducted, exactly what measurable on-the-job and organizational results are sought. Key decision makers should have a part in such an effort, which may be undertaken by a project team. Of particular importance are the measurable instructional objectives of the training effort, since they provide a clear statement of the desired results. These can be enhanced to include desired and measurable organizational changes that should eventually result from the training.
Evaluating training by measuring organizational results offers the advantage of establishing a basis of accountability for the organization. It can also reveal where future investments may have a significant payoff. However, these advantages should be weighed against the time, money, and effort involved in measuring the return. That does not come inexpensively or effortlessly.
The Four Levels of HPE Strategy Evaluation
Kirkpatrick’s evaluation model lends itself to HPE strategy evaluation with only minor modifications in keeping with a new focus on performance enhancement and on change. The refitted levels might be called Rothwell’s four levels for evaluating HPE strategy.
Level 1: Worker satisfaction with the HPE strategy
Level 2: Work results of the HPE strategy
Level 3: Work environment results of the HPE strategy
Level 4: Organizational results of the HPE strategy
These levels are tied to the four concentric circles that make up the four performance areas discussed throughout this book (see Exhibit 14-2). They tie evaluation to the intended results of the HPE strategy and to the four levels of performance.
Level 1 focuses on worker satisfaction with the HPE strategy. Like the Kirkpatrick model, it addresses this question: How well do the participants like the change strategy? Since HPE can use many methods, the question can refer to one or more strategies, including organized efforts to improve feedback, rewards and incentives, selection policies, organizational policies and procedures, job aids, and employee training efforts. As in collecting information about participant satisfaction following training, Level 1 measurement methods rely on satisfaction questionnaires, focus groups, or other methods that have been well developed in measuring customer satisfaction.
The disadvantage of focusing on worker satisfaction is that, as with training evaluations, workers may “like” or “dislike” HPE strategies for the wrong reasons. Since the aim of an HPE strategy is to enhance human performance, any criterion for assessing satisfaction other than its impact on performance is usually inappropriate. The best approach is to confine questions about worker satisfaction to perceptions of how well an HPE strategy contributed to human performance enhancement.

Exhibit 14-2. Levels of HPE strategy evaluation.

Level 4: Organizational Results of the HPE Strategy
Level 3: Work Environment Results of the HPE Strategy
Level 2: Work Results of the HPE Strategy
Level 1: Worker Satisfaction with the HPE Strategy
Level 2 focuses on work results. Like Kirkpatrick’s third level, it directs attention to this question: How well did the HPE strategy achieve measurable performance improvement at the work level? As in Level 1, Level 2 involves measuring more than one category of HPE strategy. If more than one HPE strategy is being evaluated, it will usually be necessary to aggregate the results. To complicate matters, many variables may influence HPE strategies at the work level. Probably the best that can be hoped for is to achieve a “best guess” approximation of productivity improvements resulting from the HPE strategy at the work level.
Level 3 evaluation focuses on work environment results of the HPE strategy. This level is akin to Kirkpatrick’s fourth level. It directs attention to this question: How well did the HPE strategy achieve measurable performance improvement for the organization? The aim is to calculate a return on the overall investment in the HPE strategy, even if the strategy involved using multiple change levers, such as job aids, selection improvement efforts, feedback improvement efforts, training, or reward or incentive improvement efforts. Another aim is to assess how much the HPE strategy helped the organization implement its corporate strategy and thus achieve organizational strategic goals.
Level 4 evaluation focuses on organizational environment results of the HPE strategy. This level has no counterpart in Kirkpatrick’s hierarchy. It directs attention to this question: How well did the HPE strategy achieve measurable performance improvement at the competitive level? The aim is to calculate how much the HPE strategy helped the organization improve customer service, achieve a competitive edge, anticipate external environmental change, and beat competitors to the punch. This level is immensely difficult to quantify, but it can be evaluated through success stories or other means.10
How Do HPE Strategy Evaluation Methods Differ from Training Evaluation Methods?
As might be gleaned from the preceding section, a key difference exists between HPE strategy evaluation and training evaluation. Training evaluation, as described by Kirkpatrick’s model, focuses on planned learning and its impact on participants, job behaviors, and organizational results. HPE strategy evaluation, as described by my model, focuses on planned HPE and change. My model is thus inherently directed to measuring bottom-line results as well as strategic impact.
What Step-by-Step Models Can Guide HPE
Evaluation Strategy?
Models are useful, and they can also be fun. They help conceptualize what to do and how to do it. A step-by-step model to guide HPE evaluation strategy may be helpful to HPE specialists faced with conducting HPE evaluation.
A key point should be emphasized, however: It is advisable to establish performance goals to guide an HPE strategy before it is implemented. Decisions about implementing or forgoing an HPE strategy (or combination of strategies) should be made before action is taken.

There are several reasons for this advice. Such a practice is economical, focusing the organization’s resources on those areas in which the greatest gains are likely to be made. In addition, specifying desired results in advance establishes accountability for HPE specialists and builds ownership among key stakeholders and decision makers in achieving results. Ownership is more difficult to create after training is conducted, selection methods have been changed, or any other HPE strategy has already been undertaken.
That is not to say that it is impossible to conduct concurrent or after-the-fact evaluation. What follows are three different models for evaluating HPE strategy. The first model should be used before an HPE strategy is implemented, the second during implementation, and the third when the HPE strategy has been in place long enough to judge outcomes.
Model 1: Forecasting the Results of HPE Strategy
Forecasting the results of an HPE strategy is done at the time a strategy or combination of strategies is selected and before the strategy is implemented (see Exhibit 14-3).
Exhibit 14-3. A model for forecasting the results of HPE strategy.

What is the performance problem costing the organization, or what financial gains would result from pursuing a performance improvement opportunity? (benefits)

What will it cost to solve the problem or pursue the performance improvement opportunity? (costs)

What are the benefits minus the costs? Is the remainder positive? (If not, reject the HPE strategy; if so, compare it to other possible HPE strategies that might be pursued to determine which project will yield the greatest return.)

If the HPE strategy is undertaken to solve a human performance problem, estimate what that problem is costing the organization. Base the estimate on the consequences of the problem, such as lost business, lost production, or scrap. If that is not clear, ask decision makers how they know that a problem exists. Their answer will shed light on what to measure. Then estimate what it will cost the organization to solve the problem. Include costs associated with clarifying the problem, identifying possible HPE strategies, and implementing the HPE strategies. Compare the costs of solving the problem to the expected benefits. Take action only if the comparison shows that estimated benefits will outweigh estimated costs. To identify the costs and benefits associated with the HPE strategy, interview line managers, employees, customers, distributors, suppliers, or other key groups that may have the necessary information. Use an interview guide like the one in Exhibit 14-4 as a starting point to surface those costs and benefits.
Exhibit 14-4. An interview guide for surfacing the costs and benefits of HPE strategy.
Directions: Select individuals inside and outside the organization who are familiar with the human performance problem to be solved or the human performance enhancement opportunity to be pursued. Pose the following questions to them.
1. What is the problem to be solved, or what is the opportunity to be pursued? (Describe it.)
2. What is the problem costing the organization, or what benefits could be realized by pursuing a human performance enhancement intervention? Indicate how it can be measured in financial terms. What information is available about the actual costs or benefits of the problem or opportunity? Where was that information obtained, and how reliable is it?
3. What will it cost to solve the problem or pursue the human performance enhancement opportunity? (You may wish to suggest some possible ways to solve the problem or pursue the opportunity. Then estimate the costs for analyzing the problem or opportunity, implementing an HPE strategy, and evaluating results.)
4. What is the estimated difference between benefits (item 2) and costs (item 3)? Subtract item 3 from item 2.
5. Is the remainder expressed in item 4 negative? If so, reject the project. If it is positive, consider the project. However, compare the expected return from this project to other possible projects that the organization may be considering. Prioritize them on the basis of expected rate of return.
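Items 2 through 5 of the guide reduce to a net-benefit calculation followed by a ranking. The sketch below illustrates that arithmetic in Python; the strategy names and dollar figures are hypothetical, and a real forecast would take its estimates from the interviews described above.

```python
# A minimal sketch of the forecasting arithmetic in Exhibit 14-4 (items 2-5).
# The strategy names and dollar figures are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class HPEForecast:
    name: str
    estimated_benefits: float  # item 2: cost of the problem / value of the opportunity
    estimated_costs: float     # item 3: analysis + implementation + evaluation costs

    @property
    def net_benefit(self) -> float:
        # Item 4: subtract item 3 from item 2.
        return self.estimated_benefits - self.estimated_costs

def prioritize(candidates: list[HPEForecast]) -> list[HPEForecast]:
    """Item 5: reject negative remainders, then rank the rest by expected return."""
    viable = [c for c in candidates if c.net_benefit > 0]
    return sorted(viable, key=lambda c: c.net_benefit, reverse=True)

if __name__ == "__main__":
    candidates = [
        HPEForecast("Job aids for order entry", estimated_benefits=120_000, estimated_costs=35_000),
        HPEForecast("Revised selection process", estimated_benefits=60_000, estimated_costs=80_000),
    ]
    for c in prioritize(candidates):
        print(f"{c.name}: net benefit ${c.net_benefit:,.0f}")
```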

If the HPE strategy is undertaken to seize a future opportunity, estimate the likely benefits to be realized from the opportunity and compare them to the costs of implementing the HPE strategy. If the gains are uncertain, as they often are in a new undertaking, then use the best available estimates. Implement the HPE strategy only if it is expected to yield a favorable cost-benefit ratio. Try to estimate what the gains will be. Various accounting methods devised to determine the rate of return on a project (such as internal rate of return) can be applied to this problem with the assistance of a qualified accountant.
An alternative strategy is to form a task force of stakeholders and ask them to identify measurable objectives to be achieved by the end of the HPE strategy. Those objectives should be expressed as increased units of production, measurable improvements on customer satisfaction surveys, or other results acceptable to task force members. Use the interview guide in Exhibit 14-4 to clarify measurable results with the members of the task force before the HPE strategy is undertaken. Then communicate those results beyond the task force so that other decision makers have the opportunity to comment and accept ownership in the measures.
Model 2: Conducting Concurrent Evaluation of HPE Strategy
Think of concurrent evaluation as a continuous improvement effort. Use the model in Exhibit 14-5 to guide the evaluation process as the HPE strategy is implemented.
As a first step, establish methods to track measurable financial results as they are realized. If possible, establish milestones—that is, interim points during the HPE strategy implementation to measure progress toward an ultimate goal of financial savings and gains. A milestone, for instance, might be on-the-job cost savings or productivity gains realized following the delivery of three of five training sessions or six months after a new reward program has been implemented. Nonfinancial measures, such as improvements in customer satisfaction ratings during HPE strategy implementation, can also be used. Specify in the milestones how the results will be measured and exactly what the results should be. The desired results should have been established during the selection of the HPE strategy and expressed as measurable results to be achieved. This process is akin to establishing organizational strategic objectives and tracking progress against them.
Ask stakeholders to help establish the milestones, the desirable points in time at which to measure results, and the measurement methods to use.

Exhibit 14-5. A model for conducting concurrent evaluation of HPE strategy.

Establish measures to track progress, methods to track progress, and milestones (interim measures) to be achieved during HPE implementation.

Establish a project team to measure results at the milestone times and compare expected to actual results.

Advertise successful achievement of milestone measures.

Make midcourse corrections as necessary if project results are not being achieved at the milestones.
If possible, form a standing task force or review committee to receive progress reports during implementation. Use the task force as a quality improvement team to help make program corrections when the results of the HPE strategy do not match expectations.
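The logic of Exhibit 14-5 can be reduced to comparing expected with actual results at each milestone and flagging shortfalls for midcourse correction. Here is a minimal sketch; the milestone labels, figures, and the ten percent tolerance are hypothetical choices, not prescriptions of the model.

```python
# A minimal sketch of the concurrent-evaluation loop in Exhibit 14-5: compare
# expected to actual results at each milestone and flag where a midcourse
# correction is needed. Milestone names, figures, and tolerance are hypothetical.

from dataclasses import dataclass

@dataclass
class Milestone:
    label: str        # e.g., "after 3 of 5 training sessions"
    expected: float   # target set when the HPE strategy was selected
    actual: float     # measured result at the milestone date

def review(milestones: list[Milestone], tolerance: float = 0.10) -> None:
    """Flag milestones whose actual result falls short of the expected result
    by more than the tolerance; advertise the ones that hit their targets."""
    for m in milestones:
        shortfall = (m.expected - m.actual) / m.expected
        if shortfall > tolerance:
            print(f"{m.label}: {shortfall:.0%} below target, midcourse correction needed")
        else:
            print(f"{m.label}: on track (expected {m.expected}, actual {m.actual})")

review([
    Milestone("After 3 of 5 sessions: cost savings ($K)", expected=50, actual=42),
    Milestone("6 months after reward program: output per employee", expected=110, actual=112),
])
```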
Model 3: Evaluating Outcomes of HPE Strategy
Evaluate outcomes by comparing the measurable objectives established before implementation to the results achieved after the strategy has been in place for a reasonable time. It is difficult to establish a definitive “end” for an HPE strategy, since it may have a long duration. Use the model depicted in Exhibit 14-6 to guide the evaluation process.
If the HPE strategy is intended to be semipermanent, as may be the case with a safety training program, a job aid, or a salary bonus program, then a summative evaluation may be necessary only when a major change affects the strategic goals and objectives of the organization. In practical terms, an HPE strategy may not be evaluated for final outcomes for several years. (However, concurrent evaluation should continue even when final outcomes are not evaluated.) Decision makers may request final outcome evaluations of training, but they only rarely make such requests for other HPE strategies.

Exhibit 14-6. A model for evaluating the outcomes of HPE strategy.

What measurable results were desired from the HPE strategy (established before the strategy was implemented)?

What measurable results were realized from the HPE strategy at the end (after the strategy was implemented)?

How closely did estimated results compare to actual results?

If the comparison is favorable, consider the HPE strategy a success. Advertise results.

If the comparison is unfavorable, consider the positive impact of the HPE strategy and analyze what happened.
Evaluating the outcomes of HPE strategy should be a straightforward process, provided that measurable objectives were established before implementation and the results were tracked during implementation. If objectives were not established before implementation, which is too often the case, it will be necessary to clarify afterward what measurable results were achieved. Then evidence of results should be solicited from participants, such as those who attended training sessions, users of job aids, stakeholders in reward and incentive systems, or stakeholders of a selection effort. One approach is to solicit “success” and “failure” stories from participants and users about the HPE strategy and its results. The stories may suggest appropriate measures to apply to the HPE strategy. They may also provide evidence of measurable results. Exhibit 14-7 provides a questionnaire designed to solicit such stories.
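Where objectives were set in advance, the comparison in Exhibit 14-6 amounts to lining up desired against realized results, objective by objective. A minimal sketch follows; the objective names, values, and the handling of lower-is-better measures are hypothetical.

```python
# A minimal sketch of the outcome comparison in Exhibit 14-6: line up the
# measurable objectives set before implementation against realized results.
# Objective names and values are hypothetical.

desired = {"units_per_day": 400, "customer_satisfaction": 4.2, "scrap_rate": 0.03}
realized = {"units_per_day": 415, "customer_satisfaction": 4.0, "scrap_rate": 0.025}

# For scrap_rate, lower is better; note which direction counts as improvement.
lower_is_better = {"scrap_rate"}

for objective, target in desired.items():
    actual = realized[objective]
    met = actual <= target if objective in lower_is_better else actual >= target
    verdict = "favorable: advertise the result" if met else "unfavorable: analyze what happened"
    print(f"{objective}: desired {target}, realized {actual} ({verdict})")
```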
What Research Has Been Done on Evaluating HPE, and What Has It Shown?
Much has been written about evaluating HPE interventions,11 but little data-based research is publicly available about it. (That is not to say that there is none. The best research is available only to those who work with leading consulting