Authentic assessment involves students in tasks that are derived from and simulate "real-life" (or authentic) conditions or situations. The aim of performance-based or authentic assessment is to provide valid and accurate information about what students really know and are able to do, or about the quality of educational programs. Taken together, the assessments should not reduce or distort the nature of knowledge or the nature of learning in the information-gathering process. Assessing how well knowledge and skills have been learned means requiring their use in a meaningful real-life context.
Dr. Robert G. Berns, Project Director Dr. Patricia M. Erickson, Co-Principal Investigator
U.S. Department of Education Contract ED-98-CO-0086 (Revised)
Integration of the Disciplines: Authentic Assessment
Assessing Student Performance and Understanding
In order to determine whether your integrated/contextual learning environment and activities increase what a student knows, can do, and understands about the processes used to solve real-world problems, it is necessary to design and use assessment strategies and tools. This zone describes a few assessment techniques; you will be given the opportunity to choose one or more of these, or to select different assessment techniques, for use with your integrated/contextual lesson.
Integrated/contextual learning requires a teacher to move beyond traditional forms of assessment. Teachers of integrated/contextual instruction need to use open-ended, complex challenges that enable learners to demonstrate how they construct their own meaning for content and concepts and how they solve real-world problems.
There is no right or wrong assessment strategy; there are only various ways of attempting to determine what a student knows and is able to do. Measurement tools and strategies are only as good as their relationship to the goals and expected outcomes established for a lesson. Goals and outcomes should be written so as to encourage a broad range of assessment strategies that measure a student's performance and knowledge of processes on a learning activity or project. These may include authentic assessment, performance assessment, systematic observations, portfolios, and journals. Teachers should select the assessment strategy, or strategies, most relevant to their students' learning.
Let's examine each of these techniques:
Authentic Assessment
Performance Assessment
Criterion Referenced Assessment
Systematic Observations
Portfolios and Process-folios
Journals
Criteria for Self-Assessing your Integrated/Contextual Lesson Strategies and Tools
Authentic Assessment Description
Authentic assessment is a term coined to describe alternative assessment methods. These methods allow a student to demonstrate the ability to perform tasks, solve problems, or express knowledge in ways that simulate situations found in real life (Hymes, 1991). "Real life" is usually defined as life outside the confines of the school, that is, the "real world." These simulations should reflect the kind of performance found in real-world settings such as the workplace, and they should require a product or performance. According to Eisner (1993), authentic assessment projects should reveal how students go about solving problems (process) and should have more than one "correct" solution. According to Hart (1994), the assessment strategy that fits these criteria is a combination of:
performance assessment,
systematic observations, and
portfolios.
Performance Assessment
According to Wiggins (1993), performance assessments are developed to "test" the ability of students to demonstrate their knowledge and skills (what they know and can do) in a variety of "realistic" situations and contexts. Sowell (1996) states that performance assessments can range from short or extended open-ended questions to multiple-choice questions. In a more extended definition, performance assessments can be reading or writing tasks, projects, processes, problem solving, analytical tasks, or other tasks that allow students to demonstrate their ability to meet specified outcomes and goals.
Stiehl and Bessey (1994) describe a set of factors of performance success. These factors actively engage the learner in the process of attaining performance success:
We understand the performance task and expectations.
We can define what the performance task is.
We can envision what a "good" performance will look like.
We see the linkage between the task and the goals.
We believe we will be able to perform successfully.
We recognize the value and commit to the task.
We acquire the knowledge, skills and attitudes needed to perform successfully.
We practice skills and adjust according to feedback.
We demonstrate mastery of the task.
We claim mastery.
Although these factors were developed for training adults, they provide an excellent set of benchmarks for performance tasks for children and young adults as well. Each factor is written from a student-centered position and is therefore in keeping with the philosophy of this course.
This list is taken from:
Stiehl, R., & Bessey, B. (1993). The green thumb myth: Managing learning in high performance organizations. The Learning Organization. Corvallis, OR.
Criterion Referenced Assessment
In some assessment situations, the instructor may want to use a set of clearly stated criteria for evaluation. The criteria usually outline how a student can reach "mastery" of the identified outcomes and ultimately reach the goals of the contextual lesson.
Recently, criterion-referenced assessment has taken the form of scoring guides, or rubrics. A scoring guide clearly establishes and describes specific levels of achievement. The significant difference between criterion-referenced assessment and "grades" is that grades are norm-referenced while scoring guides are criterion-referenced. We assume that students understand what an A, B, C, D, or F means based on their own interpretation, but grades by themselves do not describe what a "good" project, performance, or process is; criterion-referenced scoring systems describe explicitly what "good" ones are. These criteria allow students to work toward mastery of learning tasks.
A holistic scoring guide generally describes the criteria that will be applied in determining whether the learner has achieved mastery of the learning task, or the "level" of what they know and can do. Scoring guides specify to the learner what the expectations for each learning task are and how "grades" for that task will be determined. More importantly, scoring guides provide the learner with the criteria for "what a good one is."
Scoring guide "levels" can be assigned any numeric value, but the levels are commonly described in a way similar to that shown below:
6 Exemplary Response
5 Competent Response
4 Minor errors, but generally satisfactory
3 Serious errors, but nearly satisfactory
2 Begins, but fails to complete
1 Unable to begin effectively
0 No attempt
I have also seen scoring guides developed to plan for the unexpected. That is, what if a student exceeds the teacher's expectations? This is a more open-ended set of criteria. The unexpected (learners exceeding the expectations) often happens if you allow for it, and one way to plan for this eventuality is to develop a scoring guide like the following:
6 Exceeds Expectations
5 Excellent Response
4 Competent Response
3 Minor errors, but generally satisfactory
2 Serious errors, but nearly satisfactory
1 Begins, but fails to complete
0 No attempt, does not engage in the task
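For teachers who keep scoring guides in a spreadsheet or gradebook program, a guide like the one above is simply a mapping from numeric levels to criterion descriptions. The following minimal sketch (the descriptors come from the open-ended guide above; the function name `describe_score` is a hypothetical helper, not from any particular gradebook tool) shows one way to represent it:

```python
# The open-ended scoring guide, represented as a mapping from numeric
# level to criterion description.
SCORING_GUIDE = {
    6: "Exceeds Expectations",
    5: "Excellent Response",
    4: "Competent Response",
    3: "Minor errors, but generally satisfactory",
    2: "Serious errors, but nearly satisfactory",
    1: "Begins, but fails to complete",
    0: "No attempt, does not engage in the task",
}

def describe_score(level: int) -> str:
    """Return the criterion description for a numeric level."""
    if level not in SCORING_GUIDE:
        raise ValueError(f"level must be one of {sorted(SCORING_GUIDE)}")
    return SCORING_GUIDE[level]

print(describe_score(4))  # Competent Response
```

Storing the guide as data rather than prose makes it easy to share the same criteria with students, attach a descriptor to each recorded score, and revise a descriptor in one place.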
Involving Students in the Development of Assessment Criteria
In some cases teachers have involved students in the development of assessment criteria. This has the advantage of students thinking about the criteria for a project or performance and then creating the language for the scoring guide; students interpret the criteria and express them in language that is meaningful to them. The teacher usually guides this process to ensure that learners do not develop inappropriate criteria (not in keeping with shared realities or standards) that fail to measure whether the student meets the outcomes and goal(s) of the contextual lesson. The downside is that the process is time consuming. It is an excellent process to use with adult learners, when developing the criteria is an important part of the learning process and of ensuring that the assessment is meaningful to the learner.
Systematic Observations
Systematic observation of students is a tried and useful method for providing information about the impact of lesson activities on students. According to Sowell (1996), systematic observation means that all students are observed, that they are observed often and regularly, and that observations are recorded for both typical and atypical behavior. The observer then reflects on these observations and interprets them to guide students toward meeting the lesson outcomes and goal(s).
The key to useful observation is that it be systematic. Observations are only useful if the "data" are recorded, evaluated, and used to improve student performance.
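Sowell's requirements (all students, observed often, both typical and atypical behavior recorded) can be enforced mechanically if the observation record is kept as structured data. The sketch below is a hypothetical illustration, not a real tool; the class and method names (`ObservationLog`, `record`, `unobserved`) are invented for this example:

```python
from datetime import date

class ObservationLog:
    """A simple systematic-observation record: every student is tracked,
    each entry is dated, and atypical behavior is flagged."""

    def __init__(self, students):
        # One entry list per student, so no one can be silently skipped.
        self.entries = {name: [] for name in students}

    def record(self, student, note, atypical=False, on=None):
        """Record a dated observation for one student."""
        self.entries[student].append(
            {"date": on or date.today(), "note": note, "atypical": atypical}
        )

    def unobserved(self):
        """Students with no recorded observations yet: the check that
        'all' students are being observed, not just the memorable ones."""
        return [name for name, notes in self.entries.items() if not notes]

log = ObservationLog(["Ana", "Ben", "Cara"])
log.record("Ana", "Explained her solution strategy to a peer")
print(log.unobserved())  # ['Ben', 'Cara']
```

Even a paper checklist serves the same purpose; the point is that the record itself tells the teacher who still needs to be observed and when each note was made.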
Portfolios and Process-folios
Portfolios are collections of students' skills, ideas, interests, and accomplishments that span a period of time (Hart, 1994). Portfolios have been used to show student performance in many fields. Some of these include architecture, graphic arts, photography, and writing.
Recently, portfolios have been used to capture a representative sample of student work in various disciplines over time. Often the student is given the opportunity to select the work they feel best represents their knowledge and efforts during a grade level. A portfolio may represent one discipline or any number of disciplines.
Another type of portfolio is what Zessoules and Gardner (1991) term a process-folio. A process-folio provides a repository for selected works that show the development of students' learning over time. Process-folios are intended to reveal multiple dimensions of students' learning by providing samples that display "depth, breadth, and growth" of thought processes. Portfolios may also contain teachers' written observations over time, along with statements of goals for a course or courses and specific lesson objectives. This allows other individuals to examine the contents of the portfolio in relation to that context.
Portfolios often take the physical form of folders, binders, or notebooks. We have even begun to see electronic portfolios, which use multimedia and hypermedia to display students' work.
Another practice is to have students and/or teachers reflect on the portfolio (performance over time) by examining the finished works and expressing, often in written form, how they believe they have performed over time.
Journals
Journaling is a reflective process in which the student thinks about the learning process and product and then writes down their ideas, interests, and experiences. Journals provide a way for students to reflect, and for teachers to examine this reflection and better understand students' thinking. Journals are appropriate for documenting changes in students' perceptions of themselves and their abilities (Hart, 1994).
Journals commonly take different forms, two of which are:
self-directed journaling, and
teacher directed journaling.
In self-directed journaling, the student determines the topic, content, and direction the reflection will take. In teacher-directed journaling, the teacher directs the journal responses (reflection) toward a specific goal, outcome, or topic. Both techniques have value.
Journals are time consuming but can be extremely valuable in assessing a student's perception of their experiences. This process can also be a valuable communication tool for both the student and teacher.
*********
Using Intellectual Standards to Assess Student Reasoning
by Richard Paul
To assess student reasoning requires that we focus our attention as teachers on two inter-related dimensions of reasoning. The first dimension consists of the elements of reasoning; the second dimension consists of the universal intellectual standards by which we measure student ability to use, in a skillful way, each of those elements of reasoning.
Elements of Reasoning
Once we progress from thought which is purely associational and undisciplined, to thinking which is conceptual and inferential, thinking which attempts in some intelligible way to figure something out, in short, to reasoning, then it is helpful to concentrate on what can be called "the elements of reasoning". The elements of reasoning are those essential dimensions of reasoning whenever and wherever it occurs. Working together, they shape reasoning and provide a general logic to the use of reason. We can articulate these elements by paying close attention to what is implicit in the act of figuring anything out by the use of reason. These elements, then - purpose, question at issue, assumptions, inferences, implications, point of view, concepts and evidence - constitute a central focus in the assessment of student thinking.
Standards of Reasoning
When we assess student reasoning, we want to evaluate, in a reasonable, defensible, objective way, not just that students are reasoning, but how well they are reasoning. We will be assessing not just whether they are using the elements of reasoning, but the degree to which they are using them well, critically, in accord with appropriate intellectual standards.
To assess a student response, whether written or oral, in structured discussion of content or in critical response to reading assignments, by how clearly or completely it states a position, is to assess it on the basis of a standard of reasoning. Similarly, assessing student work by how logically and consistently it defends its position, by how flexible and fair the student is in articulating other points of view, by how significant and realistic the student's purpose is, by how precisely and deeply the student articulates the question at issue - each of these is an evaluation based on standards of reasoning.
Distinct from such reasoning standards are other standards that teachers sometimes use to assess student work. To evaluate a student response on the basis of how concisely or elegantly it states a position is to use standards that are inappropriate to assessing student reasoning. Similarly unrelated to the assessment of reasoning is evaluating student work by how humorous, glib, personal or sincere it is, by how much it agrees with the teacher's views, by how "well-written" it is, by how exactly it repeats the teacher's words, by the mere quantity of information it contains. The danger is that such standards are often conflated with reasoning standards, often unconsciously, and students are assessed on grounds other than the degree to which they are reasoning well.
The basic conditions implicit whenever we gather, conceptualize, apply, analyze, synthesize, or evaluate information - the elements of reasoning - are as follows:
Purpose, Goal, or End in View. Whenever we reason, we reason to some end, to achieve some objective, to satisfy some desire or fulfill some need. One source of problems in student reasoning is traceable to defects at the level of goal, purpose, or end. If the goal is unrealistic, for example, or contradictory to other goals the student has, if it is confused or muddled in some way, then the reasoning used to achieve it is problematic. A teacher's assessment of student reasoning, then, necessarily involves an assessment of the student's ability to handle the dimension of purpose in accord with relevant intellectual standards. It also involves giving feedback to students about the degree to which their reasoning meets those standards. Is the student's purpose - in an essay, a research project, an oral report, a discussion - clear? Is the purpose significant or trivial or somewhere in between? Is the student's purpose, according to the most judicious evaluation on the teacher's part, realistic? Is it an achievable purpose? Does the student's overall goal dissolve in the course of the project, does it change, or is it consistent throughout? Does the student have contradictory purposes?
Question at Issue, or Problem to be Solved. Whenever we attempt to reason something out, there is at least one question at issue, at least one problem to be solved. One area of concern for assessing student reasoning, therefore, will be the formulation of the question to be answered or problem to be solved, whether with respect to the student's own reasoning or to that of others. Assessing skills of mastery of this element of reasoning requires assessing - and giving feedback on - students' ability to formulate a problem in a clear and relevant way. It requires giving students direct commentary on whether the question they are addressing is an important one, whether it is answerable, on whether they understand the requirements for settling the question, for solving the problem.
Point of View, or Frame of Reference. Whenever we reason, we must reason within some point of view or frame of reference. Any "defect" in that point of view or frame of reference is a possible source of problems in the reasoning. A point of view may be too narrow, too parochial, may be based on false or misleading analogies or metaphors, may contain contradictions, and so forth. It may be restricted or unfair. Alternatively, student reasoning involving articulation of their point of view may meet the relevant standards to a significant degree: their point of view may be broad, flexible, fair; it may be clearly stated and consistently adhered to. Feedback to students would involve commentary noting both when students meet the standards and when they fail to meet them. Evaluation of students' ability to handle the dimension of point of view would also appropriately direct students to lines of reasoning that would promote a richer facility in reasoning about and in terms of points of view.
The Empirical Dimension of Reasoning. Whenever we reason, there is some "stuff," some phenomena about which we are reasoning. Any "defect," then, in the experiences, data, evidence, or raw material upon which a person's reasoning is based is a possible source of problems. Students would be assessed and receive feedback on their ability to give evidence that is gathered and reported clearly, fairly, and accurately. Does the student furnish data at all? Is the data relevant? Is the information adequate for achieving the student's purpose? Is it applied consistently, or does the student distort it to fit her own point of view?
The Conceptual Dimension of Reasoning. All reasoning uses some ideas or concepts and not others. These concepts can include the theories, principles, axioms and rules implicit in our reasoning. Any "defect" in the concepts or ideas of the reasoning is a possible source of problems in student reasoning. Feedback to students would note whether their understanding of theories and rules was deep or merely superficial. Are the concepts they use in their reasoning clear ones? Are their ideas relevant to the issue at hand, are their principles slanted by their point of view?
Assumptions. All reasoning must begin somewhere, must take some things for granted. Any "defect" in the assumptions or presuppositions with which the reasoning begins is a possible source of problems for students. Assessing skills of reasoning involves assessing their ability to recognize and articulate their assumptions, again according to the relevant standards. The student's assumptions may be stated clearly or unclearly; the assumptions may be justifiable or unjustifiable, crucial or extraneous, consistent or contradictory. The feedback students receive from teachers on their ability to meet the relevant standards will be a large factor in the improvement of student reasoning.
Implications and Consequences. No matter where we stop our reasoning, it will always have further implications and consequences. As reasoning develops, statements will logically be entailed by it. Any "defect" in the implications or consequences of our reasoning is a possible source of problems. The ability to reason well is measured in part by an ability to understand and enunciate the implications and consequences of the reasoning. Students therefore need help in coming to understand both the relevant standards of reasoning out implications and the degree to which their own reasoning meets those standards. When they spell out the implications of their reasoning, have they succeeded in identifying significant and realistic implications, or have they confined themselves to unimportant and unrealistic ones? Have they enunciated the implications of their views clearly and precisely enough to permit their thinking to be evaluated by the validity of those implications?
Inferences. Reasoning proceeds by steps in which we reason as follows: "Because this is so, that also is so (or probably so)," or "Since this, therefore that." Any "defect" in such inferences is a possible problem in our reasoning. Assessment would evaluate students' ability to make sound inferences in their reasoning. When is an inference sound? When it meets reasonable and relevant standards of inferring. Are the inferences the student draws clear? Are they justifiable? Do they draw deep conclusions or do they stick to the trivial and superficial? Are the conclusions they draw consistent?
