Schuman S. - The IAF Handbook of Group Facilitation (2005)(en).pdf

Visual dialogue tools: Pictures and images or templates produced to support visual communication and facilitation in group processes.

Visual facilitation: Facilitation using visual tools in interactive, highly facilitated approaches. Also referred to as graphic facilitation.

Visual practitioning: Using visual tools and methods as a practitioner (www.visualpractitioner.org).

Visualization: The forming of a mental image of something that is not actually present or is abstract; the formulation and communication of ideas in images. Here, it means making thoughts, content, and processes visible and adaptable: giving form to something that cannot be said or written.

Visualizer: The person who creates the visualization.

Visual repatterning: Reordering intrapersonal images into new patterns by means of outer images.

Visual language: “1. the integration of words, images, and shapes into a single unit. 2. the use of words and images or words and shapes to form a single communication unit” (Horn, 1998, p. 8). Horn defines images as “visible shapes that resemble objects in the perceivable world” and shapes as “abstract forms that . . . do not resemble objects in the natural world,” such as lines, points, and arrows (Horn, 1998, p. 72).



Chapter Twenty-Four

Facilitating Participatory Evaluation Through Stories of Change

Terry D. Bergdall

Increasingly, facilitators are being asked to play an enabling role in participatory evaluations, that is, the review and assessment of organizational activities by people who are involved in their implementation. At least two major factors contribute to this. First, there is growing awareness of the importance of organizational learning. Whereas evaluation was once primarily seen as a management tool to inform top decision makers, there is now greater recognition of the relationship between active participation of key stakeholders and organizational effectiveness. Many people today see participatory evaluation as a practical mechanism for expanding learning opportunities among those directly involved in and affected by particular activities.

Second, organizations that invest in facilitated processes are increasingly interested in learning more about the ultimate impact of such processes. This can be a major challenge because many of the anticipated benefits associated with facilitation—such as team effectiveness, group initiative and responsibility, capacity for problem solving, transparency in decision making, creativity, and self-confidence—are intangible and are difficult to quantify. Participatory evaluation is not only consistent with the underlying values and principles of facilitation, it can offer an effective means for understanding organizational and social change, including the role of facilitation itself, in ways that more conventional evaluation methods cannot.

This chapter provides facilitators with an overview of participatory evaluation and is divided into two parts. The first introduces participatory evaluation in general; the second focuses on a particular approach based on stories of change that systematically engages stakeholders in a dialogue about outcomes. Although the approach is illustrated through experiences gained in international development projects, suggestions are made for how it might be applied in connection with any facilitated process.

AN OVERVIEW OF PARTICIPATORY EVALUATION

Professional Program Evaluation

New initiatives, interventions, and innovations—programs in the broadest sense of the word—are planned and implemented in most sectors of organized life: education, social services, public health, private business, community and organizational development, and many others. When time and money are invested in planned activities, it is reasonable to expect that people want to know the results. Over the years, a profession of program evaluation has evolved with established values, norms, and practices for accomplishing this (Rossi, Freeman, and Lipsey, 2003; Wholey, Hatry, and Newcomer, 1994).

Participatory evaluation is a relatively new feature that has emerged within this larger environment. Before venturing forth to do participatory evaluation, it is advisable for new facilitators to gain some basic familiarity with the professional turf that they are about to enter. Terms and concepts are a good place to start. Summative evaluations provide information to decision makers about the impact and worth of particular programs. They typically occur at the end of a program cycle when curtailment, extension, or expansion is being considered. The primary clients for these evaluations tend to be policymakers or funders. Formative evaluations support an ongoing program. They are normally commissioned by, and delivered to, people who have the power to make improvements—often program managers. They are primarily concerned with furthering a program’s effectiveness.

In both types of evaluation, professional evaluators usually rely on comparisons as the basis for drawing conclusions. This is usually seen as the foundation of all program evaluation. One approach is to compare actual program outcomes to planned targets. For example, a program might have planned to deliver 20 workshops to 400 attendees but in actuality delivered 15 workshops to 225 attendees. Another approach is to compare key outcome measures in two different locations: one that received the program and one that did not. An example is comparing the number of drunk driving violations in two communities: one in which teens participated in drunk-driver workshops and one in which there were no such workshops. A third approach is to compare costs and benefits—for example, comparing the amount of money spent on training employees in new procedures with the savings realized afterward by the use of these procedures. The purpose of these comparisons is to enable reasoned analysis about the results and effectiveness of program activities.
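The three kinds of comparison can be sketched in a few lines of code. All figures below are hypothetical, echoing the illustrative examples in the text rather than data from any real evaluation.

```python
# Hypothetical sketch of the three comparison types used in program
# evaluation; every number here is invented for illustration.

def achievement_rate(actual, planned):
    """Actual outcome as a fraction of the planned target."""
    return actual / planned

# 1. Planned targets vs. actual outcomes (the workshop example).
workshops = achievement_rate(actual=15, planned=20)    # 0.75
attendees = achievement_rate(actual=225, planned=400)  # 0.5625

# 2. A key outcome measure in a program community vs. a comparison
#    community (the drunk-driving example).
violations_program = 12
violations_comparison = 30
difference = violations_comparison - violations_program  # 18 fewer violations

# 3. Costs vs. benefits: training spend against realized savings.
training_cost = 50_000
savings = 80_000
benefit_cost_ratio = savings / training_cost  # 1.6

print(workshops, attendees, difference, benefit_cost_ratio)
```

None of these comparisons is meaningful on its own; each supplies a baseline against which observed results can be reasoned about.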

Many professional evaluators have stressed the importance of producing findings that have the greatest possible practical use. They ask, “Who will be the primary consumers of the evaluation findings, and what information do they need about the program? What are the particular aims and objectives of these users?” This approach, known as utilization evaluation, suggests that since different users have different needs, the design of any evaluation should be tailored to meet the needs of specific users (Patton, 1997). The range of potential users includes public policymakers, private donors, program managers, project staff, and other stakeholders. The purposes of evaluation are as varied as the users. Besides the broad categories of summative and formative evaluation, the evaluation objectives include impact assessment, project management and planning, and public accountability. Because there is no one simple approach to evaluation that can satisfy all users and all purposes, utilization evaluators argue for a wide range of methodologies that can be selectively applied to diverse situations.

Debates about objectivity and subjectivity within evaluations have an intense history among professional evaluators. The difference between objectivity and subjectivity was primarily seen as a distinction between the use of quantitative and qualitative methodologies. Although this debate still exists, it has largely subsided. Most evaluators today would agree that both qualitative and quantitative methods are valuable and that most evaluations benefit from a mix of the two. Still, experienced evaluators recognize the importance of clearly acknowledging points of vulnerability where subjectivity, or the appearance of subjectivity, might inadvertently creep into an evaluation and take measures to counterbalance it. Such vulnerabilities, and prescribed actions for addressing them, are then described in the methodology section of an evaluation report.


Although the particular approaches of program evaluation vary infinitely depending on the needs of particular users, almost all follow a basic generic format. The following five steps provide an overview for the evaluation process:

1. Define the agenda and identify the key evaluation questions to be answered.

2. Design a plan, identify data sources, select methods, and set a schedule.

3. Collect relevant information according to the evaluation design.

4. Analyze and interpret data to draw conclusions and make recommendations.

5. Disseminate findings.
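The five-step format above can be modeled as a simple plan object. This is a minimal sketch; the class and field names are hypothetical, invented purely to make the sequence of steps concrete.

```python
# A minimal sketch of the generic five-step evaluation format,
# modeled as a plan object. All names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class EvaluationPlan:
    key_questions: list[str]                                  # step 1: agenda and questions
    data_sources: list[str]                                   # step 2: design, sources, methods
    collected: dict[str, str] = field(default_factory=dict)   # step 3: gathered information
    findings: list[str] = field(default_factory=list)         # steps 4-5: conclusions to share

    def collect(self, source: str, info: str) -> None:
        """Step 3: record information gathered from a data source."""
        self.collected[source] = info

    def conclude(self, finding: str) -> None:
        """Step 4: add an interpreted conclusion or recommendation."""
        self.findings.append(finding)

plan = EvaluationPlan(
    key_questions=["Did the workshops reach the planned audience?"],
    data_sources=["attendance records"],
)
plan.collect("attendance records", "15 workshops, 225 attendees")
plan.conclude("Delivery fell short of the planned 20 workshops.")
print(plan.findings)  # step 5: disseminate
```

What makes an evaluation participatory is not the format itself but who fills in each field: stakeholders can define the questions, gather the data, and draw the conclusions.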

Program evaluation is typically viewed as an activity to be planned and carried out by external professionals brought in from outside the program. Monitoring is often seen as an internal formative process that is usually carried out by program staff to provide information for ongoing internal assessments. A recent development has been to involve program participants, or beneficiaries, along with other interested stakeholders, in the monitoring process. Participatory monitoring and evaluation (PM&E) is the term that includes this monitoring function (Estrella and Gaventa, 1998). The acronym PM&E is also often used as a shorthand synonym for the more general term participatory evaluation.

Participatory Monitoring and Evaluation

Participatory monitoring and evaluation refers to stakeholder involvement in assessing the effectiveness and efficiency of activities planned and carried out by organizations or groups to realize particular goals. Stakeholder involvement usually builds upward from those who are most directly involved in a program or innovation, for example, primary implementers and direct beneficiaries, to others who are less directly involved in the ongoing work being evaluated, for example, personnel who sit at different levels of an implementing agency far from the actual delivery of program activities.

There is a wide spectrum of approaches for doing PM&E. Variations largely depend on different aims and objectives. A distinction made by some observers divides PM&E into two basic types of evaluation: practical and transformative (Cousins and Whitmore, 1998).

Practical participatory evaluation is based on a utilization perspective that stakeholder participation is the best way to enhance evaluation relevance, ownership, and stakeholder adoption (Patton, 1997). Four outcomes are usually anticipated as a result of practical PM&E activities:

• An increase in stakeholder understanding of the program and insights about its effective implementation

• Enhanced capacity of stakeholders to engage further in future evaluation processes and other program activities

• A greater sense of ownership of the evaluation findings and therefore an increased likelihood that stakeholders will be committed to act on recommendations

• Accountability to other stakeholders by providing information about the degree to which project objectives have been met and resources used

Professional evaluators in most cases will carry out the technical aspects of such evaluations. These tasks include overall responsibility for designing the process, selecting methods, facilitating data collection, overseeing the analysis of data, and writing and presenting the final report. Potential roles for stakeholders might include defining the evaluation topic and interpreting data. Also, evaluators may train stakeholders and involve them directly in the collection of data.

Participatory evaluations are undertaken in all sectors of society, including private business, and they are extensively used in social programs. Based on needs assessments, most social programs are designed to deliver basic services to targeted beneficiaries. Traditionally, staff and management are seen as the primary actors in these programs, while target groups are more or less objects of an intervention. Practical participatory evaluation provides a tool for allowing recipients to become more connected to the program through their involvement in the evaluation process. However, decisions about the degree of beneficiary involvement in the evaluation usually remain in the hands of program management.

Transformative participatory evaluation is a more ideologically driven approach and is fundamentally concerned with issues of control and power. Its intellectual roots include action research originating primarily, but not exclusively, in the developing world. Its aims are far different from those of practical PM&E. Rather than merely improving service delivery within a program, transformative PM&E employs participatory principles for the sake of democratizing social change. It attempts to turn relationships upside down. In transformative PM&E, the beneficiaries—those who are often seen as the “objects” of an intervention—are the primary stakeholders and actors responsible for the evaluation process. Although they may be dependent on professional evaluators and facilitators for training in the initial phases, it is envisioned that as they become more familiar with and sophisticated in the process, they will control all aspects of evaluation, including the generation, ownership, and dissemination of resulting knowledge. This also includes decision making regarding project change and implementation of new strategies. Indeed, it is sometimes difficult within this perspective to distinguish evaluation activities from other ongoing development work.

Rhetoric about transformative PM&E often exceeds its practice. Rather than an either-or relationship, it is perhaps more helpful to see practical and transformative aspects of PM&E on a continuum of aims and objectives. An example is a particular approach called empowerment evaluation (Fetterman, Kaftarian, and Wandersman, 1996) that clearly envisions PM&E in transformative terms. It begins by building on utilitarian aims—increased understanding, sense of ownership, and enhanced capacity among stakeholders—and then moves toward developing the abilities of participants to continue assessment on their own. This is an important step toward realizing, as Fetterman calls it, “participant liberation,” that is, release from preexisting roles and constraints. In addition to facilitating the PM&E process, Fetterman encourages evaluators to play a direct advocacy role with policymakers and donors on behalf of a program and its stakeholders. Liberation and advocacy are transformative aims that clearly move well beyond the utilitarian purposes of most practical evaluation.

The degree to which professional facilitators are involved in the practical or transformative facets of participatory evaluation will largely depend on their personal values, priorities, and opportunities. However, these distinctions between practical and transformative facets are important because they help facilitators become more self-conscious as they make decisions about the overall design and underlying purpose of their PM&E activities.

Resources and Methods for PM&E

If determining the aims of a participatory evaluation is the first step in the process, selecting appropriate methods is the second. Many traditional evaluation methods, such as surveys, questionnaires, group interviews, participant observation, and document reviews, are also used in participatory evaluation (Wholey, Hatry, and Newcomer, 1994; Herman, Morris, and Fitz-Gibbon, 1987). The methods themselves are rather neutral in regard to PM&E. Their “participatory” nature largely depends on the degree of stakeholder involvement in their selection, application, and analysis.

There are, however, a number of methods that have been specifically created for use in participatory evaluations. The Venn diagram is one such method and provides a good example (Donnelly, 1997). Venn diagrams use overlapping circles to analyze relationships within institutions or between organizations and stakeholders. They can show, for example, different participant perceptions about accessibility or restrictions to resources. Circles of various sizes are cut out of paper and given to participants, who are then asked to use the circles to represent different institutions, groups, or departments, with the size of each circle indicating its perceived importance. By overlapping the circles or placing them far apart, participants show the degree of contact and interaction between institutions or groups. Reflective discussions with the group after the exercise can then help generate additional qualitative data that are documented for the evaluation.
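The logic of the paper-circle exercise can be sketched computationally: each institution becomes a circle whose radius encodes perceived importance and whose placement encodes closeness of contact. The institution names and coordinates below are hypothetical, invented only to illustrate the idea.

```python
import math

# Sketch of the Venn diagram exercise: circles as (x, y, radius).
# Radius stands for perceived importance; distance between centers
# stands for degree of contact. All placements here are invented.

def circles_overlap(a, b):
    """Two circles overlap when the distance between their centers
    is less than the sum of their radii."""
    (ax, ay, ar), (bx, by, br) = a, b
    return math.hypot(ax - bx, ay - by) < ar + br

placements = {
    "village council": (0.0, 0.0, 3.0),   # large circle: seen as important
    "school board":    (2.0, 1.0, 2.0),   # overlaps the council: close contact
    "district office": (10.0, 0.0, 2.5),  # placed far away: little interaction
}

names = list(placements)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        relation = "interact" if circles_overlap(placements[a], placements[b]) else "are distant"
        print(f"{a} and {b} {relation}")
```

In the facilitated version, of course, the value lies less in the final layout than in the group discussion that produces it.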

A Venn diagram is only one of a countless number of methods that are appropriate for conducting participatory evaluations and is mentioned here only as an example. Facilitators can draw on numerous others. An especially valuable resource that describes a large number of unusually innovative methods is Creative Evaluation (Patton, 1987). (Though out of print, it is worth a trip to the library or a search on the Internet.) It has an abundance of ideas that will help facilitators design evaluation processes and help stimulate other fresh ideas. Participatory rapid appraisal methods are often used in evaluations (Robinson, 2002). The International Institute of Environment and Development (IIED) has published a series of manuals and workbooks on participatory assessment tools (originally the PLA Notes series and now Participatory Learning and Action series) since 1988. Although these are primarily intended for use in the developing world, they illustrate a large variety of methods that might be adapted to other contexts. The Kellogg Foundation (1998) and UNDP (Donnelly, 1997) have also published valuable resources describing various methods that are especially appropriate for participatory evaluations.

PARTICIPATORY EVALUATION THROUGH STORIES OF CHANGE

The particular approach to PM&E explored here, as applied within different community development programs, consists of a system that has four basic aims and is built around five design features. Central to this system is the identification of stories of change originating with local stakeholders. Exhibit 24.1 provides an example of such stories. Before jumping directly to the stories, however, brief descriptions of the underlying intentions of the programs involved and their relationship to social change set the broader context in which these evaluations took place.

Exhibit 24.1

School Renovation in Ovsiste: An Example of Stories of Change

Ovsiste is a small farming village in the rural municipality of Topola in central Serbia located about 100 kilometers south of Belgrade. According to the municipality census figures of 2002, its population consisted of 628 people and 268 households. Thirty-two children between the ages of six and twelve were registered at the local elementary school in 2002. Based on priorities determined by the community at a local planning meeting in November 2002, a decision was made to renovate the school building. All three classrooms and the administrative office were renovated over a two-month period, with completion occurring in March 2003. This involved floor repair (133 square meters), repair and refinishing of walls (470 square meters), installation of new wall paneling (135 square meters), hanging of five new doors, installation of new rain gutters (125 meters), and renovation of the school’s entire electrical system. The total cost was 12,130; 1,180 was raised in cash by the community, 5,450 came from the municipality, and 5,500 came from the Topola Rural Development Program (TRDP). The community local action group selected the four most significant changes as a result of this project.
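The funding figures in the exhibit can be checked with a little arithmetic (the currency unit is not given in the text, so the amounts are left unitless):

```python
# Quick check of the funding breakdown in Exhibit 24.1.
# Amounts are as stated in the text; the currency unit is unspecified.

contributions = {
    "community (cash)": 1_180,
    "municipality": 5_450,
    "Topola Rural Development Program": 5_500,
}

total = sum(contributions.values())
assert total == 12_130  # matches the stated total cost

for source, amount in contributions.items():
    print(f"{source}: {amount} ({amount / total:.1%})")
```

The three contributions do sum to the stated total, with the municipality and TRDP each covering roughly 45 percent and the community just under 10 percent.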

The following stories were discussed and agreed on by the community residents at a quarterly review meeting in March 2003 (Opto International AB, 2003). An effort was made to maintain the original voice of community members.

For the first time in ten years, Ovsiste has successfully completed a development project. Several different projects had been started during the past ten years, but not one of them was ever completed: repair to the water system, renovation of the access road, maintenance work on the health clinic, repairs to the church. Money was even collected from residents for doing many of these things, but still every one of them ended in failure. This time we successfully organized all of the work and properly managed the donations so that the school renovation could be completed. We started and we finished! If someone can’t visit our school to see it for themselves, then photographs of the “completion ceremony” are proof of our success.
