Common European Framework of Reference for Languages: learning, teaching, assessment
The main potential for self-assessment, however, is in its use as a tool for motivation and awareness raising: helping learners to appreciate their strengths, recognise their weaknesses and orient their learning more effectively.
Users of the Framework may wish to consider and where appropriate state:
• which of the types of assessment listed above are:
•• more relevant to the needs of the learner in their system
•• more appropriate and feasible in the pedagogic culture of their system
•• more rewarding in terms of teacher development through the ‘washback’ effect
• the way in which the assessment of achievement (school-oriented; learning-oriented) and the assessment of proficiency (real world-oriented; outcome-oriented) are balanced and complemented in their system, and the extent to which communicative performance is assessed as well as linguistic knowledge
• the extent to which the results of learning are assessed in relation to defined standards and criteria (criterion-referencing) and the extent to which grades and evaluations are assigned on the basis of the class a learner is in (norm-referencing)
• the extent to which teachers are:
•• informed about standards (e.g. common descriptors, samples of performance)
•• encouraged to become aware of a range of assessment techniques
•• trained in techniques and interpretation
• the extent to which it is desirable and feasible to develop an integrated approach to continuous assessment of coursework and fixed-point assessment in relation to related standards and criteria definitions
• the extent to which it is desirable and feasible to involve learners in self-assessment in relation to defined descriptors of tasks and aspects of proficiency at different levels, and in the operationalisation of those descriptors in, for example, series assessment
• the relevance of the specifications and scales provided in the Framework to their context, and the way in which they might be complemented or elaborated.
Self-assessment and examiner versions of rating grids are presented in Table 2 and Table 3 in Chapter 3. The most striking distinction between the two – apart from the purely surface formulation as ‘I can do . . .’ or ‘Can do . . .’ – is that whereas Table 2 focuses on communicative activities, Table 3 focuses on generic aspects of competence apparent in any spoken performance. However, a slightly simplified self-assessment version of Table 3 can easily be imagined. Experience suggests that at least adult learners are capable of making such qualitative judgements about their competence.
9.4 Feasible assessment and a metasystem
The scales interspersed in Chapters 4 and 5 present an example of a set of categories related to, but simplified from, the more comprehensive descriptive scheme presented in the text of Chapters 4 and 5. It is not the intention that anyone should, in a practical assessment approach, use all the scales at all the levels. Assessors find it difficult to cope with a large number of categories and, in addition, the full range of levels presented may not be appropriate in the context concerned. Rather, the set of scales is intended as a reference tool.
Whatever approach is adopted, any practical assessment system needs to reduce the number of possible categories to a feasible number. Received wisdom is that more than four or five categories start to cause cognitive overload and that seven categories is a psychological upper limit, so choices have to be made. In relation to oral assessment, if interaction strategies are considered a qualitative aspect of communication, then the illustrative scales contain 14 qualitative categories relevant to oral assessment:
Turntaking strategies
Co-operating strategies
Asking for clarification
Fluency
Flexibility
Coherence
Thematic development
Precision
Sociolinguistic competence
General range
Vocabulary range
Grammatical accuracy
Vocabulary control
Phonological control
It is obvious that, whilst descriptors on many of these features could possibly be included in a general checklist, 14 categories are far too many for the assessment of any single performance. In any practical approach, therefore, such a list of categories would be approached selectively. Features need to be combined, renamed and reduced into a smaller set of assessment criteria appropriate to the needs of the learners concerned, to the requirements of the assessment task concerned and to the style of the pedagogic culture concerned. The resultant criteria might be equally weighted, or alternatively certain factors considered more crucial to the task at hand might be weighted more heavily.
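The merging and weighting described above can be sketched in code. In this minimal sketch, the grouping of the 14 illustrative categories into five criteria (loosely following the pattern of Example 4 below) and the numeric weights are purely hypothetical illustrations, not prescriptions from the Framework:

```python
# Sketch: merging the 14 illustrative oral-assessment categories into a
# feasible set of five criteria, then computing a weighted overall rating.
# The grouping and weights here are hypothetical, for illustration only.

MERGED_CRITERIA = {
    "Range":       ["General range", "Vocabulary range"],
    "Accuracy":    ["Grammatical accuracy", "Vocabulary control"],
    "Fluency":     ["Fluency", "Phonological control"],
    "Interaction": ["Turntaking strategies", "Co-operating strategies",
                    "Asking for clarification", "Flexibility"],
    "Coherence":   ["Coherence", "Thematic development", "Precision",
                    "Sociolinguistic competence"],
}

# Hypothetical weighting: interaction judged most crucial for this task.
WEIGHTS = {"Range": 1.0, "Accuracy": 1.0, "Fluency": 1.0,
           "Interaction": 2.0, "Coherence": 1.0}

def overall_score(ratings):
    """Weighted mean of per-criterion ratings (e.g. bands 0-5)."""
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS) / total_weight

ratings = {"Range": 3, "Accuracy": 4, "Fluency": 3,
           "Interaction": 4, "Coherence": 3}
print(overall_score(ratings))  # → 3.5
```

Equal weighting is recovered simply by setting every weight to the same value; a different task (say, a presentation rather than a conversation) might instead weight Coherence most heavily.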
The following four examples show ways in which this can be done. The first three examples are brief notes on the way categories are used as test criteria in existing assessment approaches. The fourth example shows how descriptors in scales in the Framework were merged and reformulated in order to provide an assessment grid for a particular purpose on a particular occasion.
Example 1:
Cambridge Certificate in Advanced English (CAE), Paper 5: Criteria for Assessment (1991)
Test criteria | Illustrative scales | Other categories
Fluency | Fluency |
Accuracy and range | General range; Vocabulary range; Grammatical accuracy; Vocabulary control |
Pronunciation | Phonological control |
Task achievement | Coherence; Sociolinguistic appropriacy | Task success; Need for interlocutor support
Interactive communication | Turntaking strategies; Co-operative strategies; Thematic development | Extent and ease of maintaining contribution

Note on other categories: In the illustrative scales, statements about task success are found in relation to the kind of activity concerned under Communicative Activities. Extent and ease of contribution is included under Fluency in those scales. An attempt to write and calibrate descriptors on Need for Interlocutor Support to include in the illustrative set of scales was unsuccessful.
Example 2:
International Certificate Conference (ICC): Certificate in English for Business Purposes, Test 2: Business Conversation (1987)
Test criteria | Illustrative scales | Other categories
Scale 1 (not named) | Sociolinguistic appropriacy; Grammatical accuracy; Vocabulary control | Task success
Scale 2 (Use of discourse features to initiate and maintain flow of conversation) | Turntaking strategies; Co-operative strategies; Sociolinguistic appropriacy |
Example 3:
Eurocentres – Small Group Interaction Assessment (RADIO) (1987)
Test criteria | Illustrative scales | Other categories
Range | General range; Vocabulary range |
Accuracy | Grammatical accuracy; Vocabulary control; Socio-linguistic appropriacy |
Delivery | Fluency; Phonological control |
Interaction | Turntaking strategies; Co-operating strategies |
Example 4:
Swiss National Research Council: Assessment of Video Performances
Context: The illustrative descriptors were scaled in a research project in Switzerland as explained in Appendix A. At the conclusion of the research project, teachers who had participated were invited to a conference to present the results and to launch experimentation in Switzerland with the European Language Portfolio. At the conference, two of the subjects of discussion were (a) the need to relate continuous assessment and self-assessment checklists to an overall framework, and (b) the ways in which the descriptors scaled in the project could be exploited in different ways in assessment. As part of this process of discussion, videos of some of the learners in the survey were rated onto the assessment grid presented as Table 3 in Chapter 3, which presents a selection from the illustrative descriptors in a merged, edited form.
Test criteria | Illustrative scales | Other categories
Range | General range; Vocabulary range |
Accuracy | Grammatical accuracy; Vocabulary control |
Fluency | Fluency |
Interaction | Global interaction; Turntaking; Co-operating |
Coherence | Coherence |
Different systems with different learners in different contexts simplify, select and combine features in different ways for different kinds of assessment. Indeed, rather than being too long, the list of 14 categories is probably unable to accommodate all the variants people choose, and would need to be expanded to be fully comprehensive.
Users of the Framework may wish to consider and where appropriate state:
• the way in which theoretical categories are simplified into operational approaches in their system;
• the extent to which the main factors used as assessment criteria in their system can be situated in the set of categories introduced in Chapter 5, for which sample scales are provided in the Appendix, given further local elaboration to take account of specific domains of use.