
- Contents
- Series Preface
- Acknowledgments
- Purposes and Uses of Achievement Tests
- Diagnosing Achievement
- Identifying Processes
- Analyzing Errors
- Making Placement Decisions and Planning Programs
- Measuring Academic Progress
- Evaluating Interventions or Programs
- Conducting Research
- Screening
- Selecting an Achievement Test
- Administering Standardized Achievement Tests
- Testing Environment
- Establishing Rapport
- History and Development
- Changes From KTEA-II to KTEA-3
- Subtests
- Mapping KTEA-3 to Common Core State Standards
- Standardization and Psychometric Properties of the KTEA-3
- Standardization
- Reliability
- Validity
- Overview of the KTEA-3 Brief Form
- Brief Form Standardization and Technical Characteristics
- How to Administer the KTEA-3
- Starting and Discontinuing Subtests
- Sample, Teaching, and Practice Items
- Recording Responses
- Timing
- Queries and Prompts
- Subtest-by-Subtest Notes on Administration
- How to Score the KTEA-3
- Types of Scores
- Subtest-by-Subtest Scoring Keys
- How to Interpret the KTEA-3
- Introduction to Interpretation
- Step 1: Interpret the Academic Skills Battery (ASB) Composite
- Step 2: Interpret Other Composite Scores and Subtest Scores
- Subtest Floors and Ceilings
- Interpretation of Composites
- Clinical Analysis of Errors
- Qualitative Observations
- Using the KTEA-3 Across Multiple Administrations
- Repeated Administrations of the Same Form
- Administering Alternate Forms
- Using the KTEA-3 Brief Form
- Progress Monitoring
- Screening for a Comprehensive Evaluation
- KTEA-3 Score Reports
- History and Development
- Changes From WIAT-II to WIAT-III
- Age Range
- New and Modified Subtests
- Composites
- Administration and Scoring Rules
- Skills Analysis
- Intervention Goal Statements
- New Analyses
- New Scores
- Validity Studies
- Materials
- Scoring and Reporting
- Description of the WIAT-III
- Subtests With Component Scores
- Mapping WIAT-III to Common Core State Standards
- Standardization and Psychometric Properties of the WIAT-III
- Standardization
- Reliability
- Validity
- Starting and Discontinuing Subtests
- Sample, Teaching, and Practice Items
- Recording Responses
- Timing
- Queries and Prompts
- Subtest-by-Subtest Notes on Administration
- How to Score the WIAT-III
- Types of Scores
- Score Reports
- Subtest-by-Subtest Scoring Keys
- Listening Comprehension
- Early Reading Skills
- Reading Comprehension
- Sentence Composition
- Word Reading and Pseudoword Decoding
- Essay Composition
- Numerical Operations
- Oral Expression
- Oral Reading Fluency
- Spelling
- Math Fluency—Addition, Subtraction, and Multiplication
- Introduction to Interpretation
- Step 1: Interpret the Composite Scores
- Subtest Floors and Ceilings
- Skills Analysis
- Intervention Goal Statements
- Qualitative Data
- Using the WIAT-III Across Multiple Administrations
- Linking Studies
- Overview of the WISC-V, WISC-V Integrated, and KABC-II
- Qualitative/Behavioral Analyses of Assessment Results
- Identification of Specific Learning Disabilities
- Interpretation and Use of Three New Composite Scores
- Accommodations for Visual, Hearing, and Motor Impairments
- Ongoing Research on Gender Differences in Writing and the Utility of Error Analysis
- Female Advantage in Writing on KTEA-II Brief and Comprehensive Forms
- Strengths and Weaknesses of the KTEA-3
- Assets of the KTEA-3
- Test Development
- Two Forms
- Standardization
- Reliability and Validity
- Administration and Scoring
- Interpretation
- Phonological Processing
- KTEA-3 Flash Drive
- Limitations of the KTEA-3
- Test Development
- Standardization
- Reliability and Validity
- Administration and Scoring
- Test Items
- Interpretation
- Final Comment
- Strengths and Weaknesses of the WIAT-III
- Assets of the WIAT-III
- Test Development
- Normative Sample
- Reliability and Validity
- Administration and Scoring
- Interpretation
- Better Listening Comprehension Measure
- Technical Manual
- Limitations of the WIAT-III
- Floor and Ceiling
- Test Coverage
- Poor Instructions for Scoring Certain Tasks
- Item Scoring
- Audio Recorder
- Final Comment
- Content Coverage of the KTEA-3 and WIAT-III
- Case Report 1: Jenna
- Reason for Evaluation
- Background Information
- Behavioral Observations
- Assessment Procedures and Tests Administered
- Test Results
- Neuropsychological Implications and Diagnostic Impressions
- Recommendations
- Psychometric Summary for Jenna
- Case Report 2: Oscar
- Reason for Evaluation
- Background Information
- Behavioral Observations
- Assessment Procedures and Tests Administered
- Test Results
- Diagnostic Summary
- Recommendations
- Resources
- Psychometric Summary for Oscar
- Case Report 3: Rob
- Purpose of the Evaluation
- History and Background
- Behavioral Observations
- Assessment Procedures and Tests Administered
- Results
- Summary and Diagnostic Impressions
- Recommendations
- Psychometric Summary for Rob
- Q-interactive Versus Q-global
- Equivalency Studies
- Essential Features of Q-interactive
- Key Terminology
- Central Website
- Assess Application
- References
- Annotated Bibliography
- About the Authors
- About the Digital Resources
- Index

296 ESSENTIALS OF KTEA™-3 AND WIAT®-III ASSESSMENT
(Continued)

Interpretation (Strengths)
•Significance and base-rate levels are provided for those and other comparisons.
•Growth Scale Values (GSVs) provide a mechanism by which a student’s absolute (rather than relative) level of performance can be measured.
•The Manual encourages examiners to verify hypotheses with other data.
•The KTEA-3 Analysis & Comparisons Form provides a place to record the basic analysis of the scale composites, including normative and personal strengths and weaknesses, composite comparisons, subtest comparisons, and ability-achievement discrepancies. The specific comparisons are helpfully labeled with the identifying numbers of the relevant tables in the Manual.
•Tables are provided in the Technical & Interpretive Manual for predicting KTEA-3 standard scores from ability standard scores based upon the correlation between the tests.
•Confidence bands can be calculated for age- or grade-equivalent scores (if those scores must be used at all), but this procedure is laborious and is not suggested in the Manual.
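For standard scores, by contrast, a confidence band follows directly from the subtest's reliability coefficient. The sketch below is illustrative only: the reliability value is hypothetical, not a figure from the KTEA-3 Manual, and the usual standard-score metric (mean 100, SD 15) is assumed.

```python
# Sketch: a confidence band around an observed standard score.
# The reliability coefficient here is a hypothetical illustration.
import math

def confidence_band(standard_score, reliability, sd=15.0, z=1.96):
    """Return (lower, upper) limits around an observed standard score.

    SEM = SD * sqrt(1 - reliability); the band is score +/- z * SEM.
    With z = 1.96 this is a conventional 95% band.
    """
    sem = sd * math.sqrt(1.0 - reliability)
    half_width = z * sem
    return (standard_score - half_width, standard_score + half_width)

low, high = confidence_band(100, reliability=0.96)
print(f"95% band: {low:.1f} to {high:.1f}")
```

With a reliability of .96, the SEM is 3 points and the 95% band spans roughly 6 points in each direction, which is one reason the obtained score should never be reported as a precise point value.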
ASSETS OF THE KTEA-3
Test Development
Although, as with any test, there are a few weak or debatable items, the publisher’s extensive efforts to enlist guidance from curriculum experts, to base items on popular textbooks, and to complete a very thorough tryout produced an instrument with very useful, relevant content for educational assessment in all eight achievement areas specified by IDEA 2004 regulations for assessment of specific learning disabilities. Reading and reading-related skills are covered extensively, including normed tests of word and nonsense-word reading, oral and silent reading fluency, phonetic decoding, phonological processing, and rapid naming, as well as a criterion-referenced letter-naming measure. This is much broader coverage than is found on most broad-spectrum achievement tests and many reading tests. Reading comprehension is now measured with two subtests: Reading Comprehension and Reading Vocabulary (which we consider an extremely valuable addition to the KTEA-II selection of subtests). Reading comprehension questions are read by the examinee and themselves pose appropriate reading challenges. In a few cases, the questions appear to be more difficult than the passages (which would be a problem for a test of content knowledge but adds to the depth of measurement of a reading comprehension test). Unlike some other tests (see, for example, Keenan & Betjemann, 2006), very few KTEA-3 reading comprehension questions can be answered correctly on the basis of prior knowledge and deduction without reading the passage. Reading Comprehension also includes some direction-following items, similar to “Turn around in your chair and then stand up.”
Three subtests now measure aspects of written expression. The Written Expression subtest format, in which the examinee fills in missing letters, words, punctuation, and sentences in a storybook as the examiner tells the story and gives instructions, and then writes a summary of the entire story, is engaging, and it produces useful data, including the error analysis. There are four different Written Expression levels, three of which require separate storybooks. The need to keep three different storybooks in stock is a mild but worthwhile inconvenience. The artwork in the test easels and the storybooks is attractive and appears to appeal to the students whom we have tested. The Spelling subtest begins with letter- or word-writing items with pictures and continues with words presented in the traditional and effective format of the examiner saying the word, using it in a prescribed sentence (printed on the record form), and repeating the word. The Writing Fluency subtest (new to the KTEA-3) asks the examinee to write “good, short” sentences about what is happening in each picture in the response booklet. After the sample and teaching items, the examinee is told to “Keep your sentences short and work as fast as you can.” We strongly approve of fluency tests that explicitly tell the examinee to work fast (which is not the case with many tests).
The extensive error analysis procedures do require some time and attention if done by hand, but they provide genuinely useful information if the norms are used. The error categories are detailed and educationally meaningful. Error analysis worksheets are included on the KTEA-3 flash drive. The norms tables for the error analyses (Technical & Interpretive Manual, Appendix H) are clear and helpful. Breaux (2015) provides clear instructions on how to perform error analyses on subtests that use item sets when examinees reverse to a lower-level item set. Examiners can use the analyses when necessary and omit them when there are no unanswered questions about remediation, as in the case of a high score or when an examinee’s weakness in a domain is obvious, extreme, and pervasive. Unlike error analyses on many tests, the KTEA-3 weakness-average-strength categories are based on test norms rather than arbitrary cutoffs. It is essential that examiners use the norms, which requires some initial bullying of graduate students and supervisees. Raw scores in error categories (like any raw scores) can be very misleading.
During the KTEA-3 standardization, the KTEA-3 and KABC-II (Kaufman & Kaufman, 2004b) were administered to 99 examinees and the KTEA-3 and Differential Ability Scales (DAS-II; Elliott, 2007) to 122 examinees. Data from those studies were used to create tables of correlations between the several composite ability scores of the KABC-II and DAS-II and all of the KTEA-3 composites and subtests. Those correlations can be used to look up predicted achievement scores and significant difference and base rate data for predicted and actual achievement. These easily used tables are very valuable. Our only complaints are that there are also tables for simple differences, which might inspire someone to actually use the simple-difference method (see McLeod, 1968, 1974), and that similar tables are not provided for the Wechsler Preschool and Primary Scale of Intelligence (4th ed.) (WPPSI-IV; Wechsler, 2012a), Wechsler Intelligence Scale for Children (4th ed.) (WISC-IV; Wechsler, 2003), Wechsler Adult Intelligence Scale (4th ed.) (WAIS-IV; Wechsler, 2008), and other popular tests such as the Woodcock-Johnson IV Tests of Cognitive Abilities (WJ IV COG; Schrank, McGrew, & Mather, 2014a). Those data would be very helpful to examiners who administer the KTEA-3 while another examiner selects and administers a test of cognitive ability. The WISC-V Technical and Interpretive Manual (Wechsler, 2014b, pp. 96–101) provides WISC-V correlations with the KTEA-3.
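The predicted-achievement comparison that such tables automate can be sketched in a few lines. The correlation and scores below are hypothetical illustrations, not values from any test manual; the standard regression-to-the-mean prediction is assumed.

```python
# Sketch: predicted-achievement (regression-based) discrepancy versus
# the simple-difference method. All numeric values are hypothetical.
MEAN, SD = 100.0, 15.0

def predicted_achievement(ability_score, r_ability_achievement):
    """Predicted achievement regresses toward the mean in proportion
    to the ability-achievement correlation."""
    return MEAN + r_ability_achievement * (ability_score - MEAN)

def discrepancy(ability_score, actual_achievement, r):
    """Actual minus predicted achievement (the comparison the tables report)."""
    return actual_achievement - predicted_achievement(ability_score, r)

# With an ability score of 120 and r = .60, predicted achievement is 112,
# so actual achievement of 95 yields a -17 point discrepancy; the
# simple-difference method would report 95 - 120 = -25.
print(discrepancy(120, 95, 0.60))
```

The sketch shows why the simple-difference method overstates discrepancies for examinees far from the mean: it ignores the regression of predicted achievement toward the mean, which is precisely the concern raised about publishing simple-difference tables.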
Two Forms
The availability of two forms (A and B) of the KTEA-3 is extremely helpful in many situations. As discussed below, with very few exceptions, the two forms are statistically equivalent, so examiners bent on retesting students can alternate forms. An alternate form is also extremely useful when there is reason to doubt the validity of scores in a recent assessment.
We do need to offer one caveat. Even with alternate forms, individually administered achievement tests, such as the KTEA-3, WIAT-III, or WJ IV ACH (Schrank, Mather, & McGrew, 2014), are not well suited for measuring short-term academic progress, especially in the higher grades. Table 5.1 shows the raw scores equivalent to a standard score of 100 by Winter norms for grades 1 through 12 on five of the KTEA-3 subtests without time limits or weighted or transformed scores. In grades 1 through 4, expected annual growth on these five subtests was 4.5 to 11 raw-score points. In grades 5 through 8, however, the average student gained only 1.5 to 5.5 raw-score points per year. In high school, expected annual growth was only 0 to 3.5 raw-score points. With such small anticipated annual gains to maintain a standard score of 100, it could be difficult to distinguish growth or stagnation from random variation in test scores, especially in the higher grades. This is not a flaw of the KTEA-3, which

Table 5.1 Raw Scores Corresponding to Standard Scores of 100 by Winter Grade Norms on the KTEA-3 Form A: Five Untimed Subtests Without Weighted or Transformed Scores
| Grade | Letter-Word Identification Raw Score | Growth | Nonsense Word Decoding Raw Score | Growth | Math Computation Raw Score | Growth | Math Concepts & Applications Raw Score | Growth | Spelling Raw Score | Growth |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 38.5 | | 11 | | 22.7 | | 31.5 | | 22 | |
| 2 | 49.5 | 11.0 | 16 | 5.0 | 29 | 6.3 | 40 | 8.5 | 32 | 10.0 |
| 3 | 57 | 7.5 | 21 | 5.0 | 34.5 | 5.5 | 48 | 8.0 | 39.5 | 7.5 |
| 4 | 62 | 5.0 | 25.5 | 4.5 | 41 | 6.5 | 54 | 6.0 | 45.5 | 6.0 |
| 5 | 66 | 4.0 | 29.5 | 4.0 | 46.5 | 5.5 | 59 | 5.0 | 49 | 3.5 |
| 6 | 69.5 | 3.5 | 32 | 2.5 | 52 | 5.5 | 63 | 4.0 | 52 | 3.0 |
| 7 | 71.5 | 2.0 | 34 | 2.0 | 56.5 | 4.5 | 66 | 3.0 | 54 | 2.0 |
| 8 | 74 | 2.5 | 36 | 2.0 | 61 | 4.5 | 68 | 2.0 | 55.5 | 1.5 |
| 9 | 77.5 | 3.5 | 38 | 2.0 | 63 | 2.0 | 69.5 | 1.5 | 57 | 1.5 |
| 10 | 79.5 | 2.0 | 39.5 | 1.5 | 64 | 1.0 | 71 | 1.5 | 58 | 1.0 |
| 11 | 81 | 1.5 | 40.5 | 1.0 | 64 | 0.0 | 71.5 | 0.5 | 59 | 1.0 |
| 12 | 83 | 2.0 | 41.3 | 0.8 | 65 | 1.0 | 72 | 0.5 | 59.7 | 0.7 |

Note: From Table D.2 of the KTEA-3 Technical & Interpretive Manual (Kaufman & Kaufman, 2014b). Raw scores with tenths shown as decimals were interpolated.
is better than many individually administered achievement tests in this regard, but a warning to be very cautious about using any individually administered test to measure annual academic progress. Curriculum-based measurement (CBM) can be used as frequently as desired and may be a better way of measuring short-term progress. Group, multiple-choice achievement tests that take hours or days to administer have more items and, therefore, greater expected growth in items per year than do individually administered achievement tests. The KTEA-3 does provide examiners with an additional way to track growth across time: Growth Scale Values (GSVs). The KTEA-3 GSVs are similar to the GSV scales on the WIAT-III and the Woodcock Reading Mastery Tests, Third Edition (WRMT-III; Woodcock, 2011). The KTEA-3 GSVs form an equal-interval scale, so they can be used to monitor progress over time.
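One way to see the retest problem concretely is to compare an expected annual gain against the standard error of the difference between two testings. The sketch below uses the Jacobson-Truax reliable change index with hypothetical raw-score SEM values, not KTEA-3 data.

```python
# Sketch: why small expected annual raw-score gains are hard to
# distinguish from measurement error. All values are hypothetical.
import math

def se_difference(sem):
    """Standard error of the difference between two scores on the same
    scale, assuming equal SEMs on both occasions: SEM * sqrt(2)."""
    return sem * math.sqrt(2.0)

def reliable_change_index(score_time1, score_time2, sem):
    """Jacobson-Truax reliable change index; |RCI| > 1.96 suggests
    change beyond measurement error at the .05 level."""
    return (score_time2 - score_time1) / se_difference(sem)

# A 2-point annual raw-score gain with a raw-score SEM of 2 points:
rci = reliable_change_index(60, 62, sem=2.0)
print(round(rci, 2))
```

Under these assumed values the index is well below 1.96, so a gain the size of a typical upper-grade annual increment cannot be distinguished from random variation, which is the caution urged above.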
Standardization
Please see Chapter 2 for detailed information on the standardization of the KTEA-3, which is also described at length in Chapter 1 of the Technical & Interpretive Manual (Kaufman & Kaufman with Breaux, 2014b). The large, national sample of students tested in the fall and in the spring was stratified to match then-current U.S.