
- Contents
- Series Preface
- Acknowledgments
- Purposes and Uses of Achievement Tests
- Diagnosing Achievement
- Identifying Processes
- Analyzing Errors
- Making Placement Decisions and Planning Programs
- Measuring Academic Progress
- Evaluating Interventions or Programs
- Conducting Research
- Screening
- Selecting an Achievement Test
- Administering Standardized Achievement Tests
- Testing Environment
- Establishing Rapport
- History and Development
- Changes From KTEA-II to KTEA-3
- Subtests
- Mapping KTEA-3 to Common Core State Standards
- Standardization and Psychometric Properties of the KTEA-3
- Standardization
- Reliability
- Validity
- Overview of the KTEA-3 Brief Form
- Brief Form Standardization and Technical Characteristics
- How to Administer the KTEA-3
- Starting and Discontinuing Subtests
- Sample, Teaching, and Practice Items
- Recording Responses
- Timing
- Queries and Prompts
- Subtest-by-Subtest Notes on Administration
- How to Score the KTEA-3
- Types of Scores
- Subtest-by-Subtest Scoring Keys
- How to Interpret the KTEA-3
- Introduction to Interpretation
- Step 1: Interpret the Academic Skills Battery (ASB) Composite
- Step 2: Interpret Other Composite Scores and Subtest Scores
- Subtest Floors and Ceilings
- Interpretation of Composites
- Clinical Analysis of Errors
- Qualitative Observations
- Using the KTEA-3 Across Multiple Administrations
- Repeated Administrations of the Same Form
- Administering Alternate Forms
- Using the KTEA-3 Brief Form
- Progress Monitoring
- Screening for a Comprehensive Evaluation
- KTEA-3 Score Reports
- History and Development
- Changes From WIAT-II to WIAT-III
- Age Range
- New and Modified Subtests
- Composites
- Administration and Scoring Rules
- Skills Analysis
- Intervention Goal Statements
- New Analyses
- New Scores
- Validity Studies
- Materials
- Scoring and Reporting
- Description of the WIAT-III
- Subtests With Component Scores
- Mapping WIAT-III to Common Core State Standards
- Standardization and Psychometric Properties of the WIAT-III
- Standardization
- Reliability
- Validity
- Starting and Discontinuing Subtests
- Sample, Teaching, and Practice Items
- Recording Responses
- Timing
- Queries and Prompts
- Subtest-by-Subtest Notes on Administration
- How to Score the WIAT-III
- Types of Scores
- Score Reports
- Subtest-by-Subtest Scoring Keys
- Listening Comprehension
- Early Reading Skills
- Reading Comprehension
- Sentence Composition
- Word Reading and Pseudoword Decoding
- Essay Composition
- Numerical Operations
- Oral Expression
- Oral Reading Fluency
- Spelling
- Math Fluency—Addition, Subtraction, and Multiplication
- Introduction to Interpretation
- Step 1: Interpret the Composite Scores
- Subtest Floors and Ceilings
- Skills Analysis
- Intervention Goal Statements
- Qualitative Data
- Using the WIAT-III Across Multiple Administrations
- Linking Studies
- Overview of the WISC-V, WISC-V Integrated, and KABC-II
- Qualitative/Behavioral Analyses of Assessment Results
- Identification of Specific Learning Disabilities
- Interpretation and Use of Three New Composite Scores
- Accommodations for Visual, Hearing, and Motor Impairments
- Ongoing Research on Gender Differences in Writing and the Utility of Error Analysis
- Female Advantage in Writing on KTEA-II Brief and Comprehensive Forms
- Strengths and Weaknesses of the KTEA-3
- Assets of the KTEA-3
- Test Development
- Two Forms
- Standardization
- Reliability and Validity
- Administration and Scoring
- Interpretation
- Phonological Processing
- KTEA-3 Flash Drive
- Limitations of the KTEA-3
- Test Development
- Standardization
- Reliability and Validity
- Administration and Scoring
- Test Items
- Interpretation
- Final Comment
- Strengths and Weaknesses of the WIAT-III
- Assets of the WIAT-III
- Test Development
- Normative Sample
- Reliability and Validity
- Administration and Scoring
- Interpretation
- Better Listening Comprehension Measure
- Technical Manual
- Limitations of the WIAT-III
- Floor and Ceiling
- Test Coverage
- Poor Instructions for Scoring Certain Tasks
- Item Scoring
- Audio Recorder
- Final Comment
- Content Coverage of the KTEA-3 and WIAT-III
- Case Report 1: Jenna
- Reason for Evaluation
- Background Information
- Behavioral Observations
- Assessment Procedures and Tests Administered
- Test Results
- Neuropsychological Implications and Diagnostic Impressions
- Recommendations
- Psychometric Summary for Jenna
- Case Report 2: Oscar
- Reason for Evaluation
- Background Information
- Behavioral Observations
- Assessment Procedures and Tests Administered
- Test Results
- Diagnostic Summary
- Recommendations
- Resources
- Psychometric Summary for Oscar
- Case Report 3: Rob
- Purpose of the Evaluation
- History and Background
- Behavioral Observations
- Assessment Procedures and Tests Administered
- Results
- Summary and Diagnostic Impressions
- Recommendations
- Psychometric Summary for Rob
- Q-interactive Versus Q-global
- Equivalency Studies
- Essential Features of Q-interactive
- Key Terminology
- Central Website
- Assess Application
- References
- Annotated Bibliography
- About the Authors
- About the Digital Resources
- Index

correlation (e.g., .46 in grade 5) between the subtests, so a difference of 25 points is needed for a base rate ≤10% in grades 1 through 6.
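For readers who want a sense of the arithmetic behind such base rates, the sketch below applies the standard formula for the standard deviation of a difference between two equally scaled standard scores (SD = 15), SD_diff = 15·sqrt(2(1 − r)). It is only a rough normal-theory approximation; the manual's base-rate values are empirical, and the r = .46 and 25-point figures are simply taken from the passage above.

```python
from math import sqrt
from statistics import NormalDist

# Rough normal-theory check: how unusual is a 25-point gap between two standard
# scores (M = 100, SD = 15) that correlate r = .46, as reported for grade 5?
r = 0.46
sd_diff = 15 * sqrt(2 * (1 - r))           # SD of the difference score, about 15.6
z = 25 / sd_diff                           # a 25-point gap is about 1.6 SDs of the difference
base_rate = 2 * (1 - NormalDist().cdf(z))  # two-tailed proportion with a gap at least this large
print(f"SD of difference: {sd_diff:.1f}; estimated base rate of |diff| >= 25: {base_rate:.0%}")
# Prints an estimate of roughly 11%, in the neighborhood of the <=10% criterion cited above.
```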
The Growth Scale Values (GSVs) appear potentially very useful, and the Technical & Interpretive Manual provides more information on their use (pp. 36–37, 98–100, and 678–679) than the KTEA-II Manual did. However, there does not appear to be any provision on the record form, or in the materials provided on the flash drive, for recording or interpreting Growth Scale Values when one is hand-scoring the KTEA-3 (the online scoring provides these data).
If, for some unusual reason, an examiner wished to report grade-equivalent or age-equivalent scores, it would be helpful to report them as confidence bands rather than as single points. To do this, the examiner must translate the standard scores at the end points of the standard-score confidence band back to raw scores and then convert those raw scores to age- or grade-equivalent scores, a laborious process often requiring interpolation. In fairness, we know of no test that simplifies, encourages, or even mentions computation of confidence bands for age- and grade-equivalent scores.
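To make that back-conversion concrete, here is a minimal sketch of the procedure using a purely hypothetical norms table: the RAW, STANDARD, and GRADE_EQ values and the helper functions are invented for illustration and are not KTEA-3 data. An actual conversion would use the published norms tables and the test's own rounding and interpolation guidance.

```python
from bisect import bisect_left

# Hypothetical norms for one subtest at one grade (NOT actual KTEA-3 values):
# ascending raw scores with the standard scores and grade equivalents they map to.
RAW = [18, 22, 26, 30, 34]
STANDARD = [85, 92, 100, 108, 115]    # standard score for each raw score
GRADE_EQ = [2.4, 3.0, 3.8, 4.6, 5.3]  # grade equivalent for each raw score

def interpolate(x, xs, ys):
    """Linear interpolation of y at x, given ascending xs paired with ys."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, x)
    frac = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + frac * (ys[i] - ys[i - 1])

def grade_equivalent_band(ss_low, ss_high):
    """Convert the endpoints of a standard-score confidence band to grade
    equivalents by going standard score -> raw score -> grade equivalent."""
    raw_low = interpolate(ss_low, STANDARD, RAW)
    raw_high = interpolate(ss_high, STANDARD, RAW)
    return interpolate(raw_low, RAW, GRADE_EQ), interpolate(raw_high, RAW, GRADE_EQ)

low, high = grade_equivalent_band(90, 104)  # e.g., a confidence band of 90-104
print(f"Grade-equivalent band: about {low:.1f} to {high:.1f}")
```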
Assessment of oral language is always a vexatious issue. Oral language is tested, of course, in oral language tests, such as the Clinical Evaluation of Language Fundamentals (CELF-5; Wiig, Semel, & Secord, 2013), Oral and Written Language Scales (OWLS-II; Carrow-Woolfolk, 2012), or Comprehensive Assessment of Spoken Language (CASL; Carrow-Woolfolk, 1999), where it is treated as a separate domain. Oral language is also assessed by most cognitive ability measures, such as the Differential Ability Scales (DAS-II; Elliott, 2007) or Wechsler Intelligence Scale for Children (WISC-V; Wechsler, 2014a), where it is treated as one broad ability within overall cognitive functioning. Oral language is also included in tests of academic achievement, such as the WIAT-III and KTEA-3, where it is a domain of school achievement, comparable to reading or math. This situation is, at best, confusing to evaluators when they attempt to decide whether oral expression or listening comprehension on an achievement test is weaker than would be expected from the student's scores on cognitive ability tests that require very similar speaking and listening skills.
In our opinion, efforts to create good tests of oral language as aspects of academic achievement, as opposed to tests of oral language as its own domain or oral language as part of cognitive abilities, have met with only moderate success. The KTEA-3 oral language and oral fluency subtests strike us as being as good as any currently available oral language academic achievement tests and better than most, but the reliability and validity coefficients are much weaker than those for most other KTEA-3 subtests (see Chapter 2 and Rapid Reference 5.1).
FINAL COMMENT
Although we have identified some concerns with the KTEA-3, we consider it to be one of the best currently available comprehensive achievement batteries. It preserves the best aspects of the KTEA and KTEA-II, and the changes are, in our opinion, significant improvements.
STRENGTHS AND WEAKNESSES OF THE WIAT-III
We previously reviewed the WIAT-III (Lichtenberger & Breaux, 2010, pp. 239–269) and concluded "that, all in all, the WIAT-III is a very good instrument … a vast improvement over the prior edition…. The WIAT-III provides an efficient, thorough, reasonable and statistically reliable and valid assessment of academic abilities. It is designed with features that enhance useful interpretation" (pp. 239–240). The authors of this chapter still believe that the WIAT-III (Pearson, 2009a) is a very good instrument. Although we find the test to be, overall, a vast improvement over the prior edition and have welcomed many of the changes in the new edition, we still find several aspects of the test annoying or problematic. Admittedly, we are easily annoyed. The WIAT-III provides an efficient, thorough, reasonable, and statistically reliable and valid assessment of academic abilities. It is designed with features that enhance useful interpretation. Rapid Reference 5.2 provides a summary of the WIAT-III strengths and weaknesses. Because of limitations on the length of this chapter, several, but not all, of these points are elaborated below.
Rapid Reference 5.2
Strengths and Weaknesses of the WIAT-III

Test Development

Strengths
• Improved the floor and ceiling of several subtests.
• Expanded the number and type of subtests to measure all eight areas of achievement specified by IDEA 2004 legislation.
• New or newly separate subtests include Early Reading Skills, Alphabet Writing Fluency, Sentence Composition, Essay Composition, Oral Reading Fluency, and three Math Fluency subtests.
• The Spelling subtest no longer includes an excessive number of potential homonym confusions.
• Oral Discourse Comprehension is presented from a CD recording rather than being read aloud by the examiner.
• Extensive tryout data using samples with approximately proportional representation by sex and ethnicity.
• Easel format for presenting many subtest items.
• Error analysis procedures were expanded.
• Established links with the most recent Wechsler scales (i.e., WPPSI-III, WPPSI-IV, WISC-IV, WISC-V, WAIS-IV, WNV) and the DAS-II to enable clinicians to compare ability and achievement scores.

Weaknesses
• Limited ceiling for some subtests for the oldest students in the Above Average range. Although most subtests have a ceiling that is at least 2 SD above the mean, the ceilings for Sentence Building, Word Reasoning, Expressive Vocabulary, Oral Reading Accuracy, and Math Fluency (Addition, Subtraction, Multiplication) range from 117 to 128. At the highest age range, 15 of the 26 possible subtests have ceilings ≤144.
• Limited floor for some subtests for the youngest students with the lowest ability levels. For students age 6, raw scores of 0 correspond to standard scores in the "average range" (i.e., SS > 85) for Sentence Combining, Sentence Building, and Math Fluency (Subtraction).

Standardization

Strengths
• The standardization sample is well stratified to match the U.S. population.
• Data were obtained from a stratified sample of 2,775 students in grades pre-K–12 (1,375 students in the spring of 2008 and 1,400 students in the fall of 2008). From the overall grade sample, an overlapping stratified sample of 1,826 students ranging in age from 4 through 19 was also obtained.
• The standardization sample included students receiving special education services.
• Very detailed information on standardization procedures and the sample is included in the manual.
• An extensive (over 500 pages) Technical Manual is provided as a "green" PDF file rather than a printed manual.

Weaknesses
• We hope that examiners will take the time to read, study, and annotate their PDF copies of the Technical Manual.
• There are no college-age norms provided.

Reliability and Validity

Strengths
• The average reliability coefficients for the WIAT-III composite scores are all excellent (.91–.98).
• Average subtest reliability coefficients range from good (.83–.89) to excellent (.90–.97), with the exception of Alphabet Writing Fluency, which has an average reliability of .69.
• Correlations between the WIAT-III Total Achievement score and overall cognitive ability scores range from .60 to .82 and are consistent with research-based expectations regarding typical correlations between ability and achievement measures. Such correlations provide evidence of divergent validity, suggesting that the WIAT-III measures constructs different from those measured by each ability test.
• Strong face validity based on item content and skill development in reading, writing, and mathematics.
• Construct validity is indicated by increasing scores across grades and ages.

Weaknesses
• No alternative forms are available.
• Standard errors of measurement (SEm) are high for Alphabet Writing Fluency (8.35 for the Fall, Spring, and Age samples).
• When the WIAT-III was initially published, the only academic achievement comparison was to the WIAT-II (the KTEA-3 later provided correlations with the WIAT-III).

Administration and Scoring

Strengths
• Easel format with tabbed separators simplifies administration.
• Starting points and discontinue rules are clearly labeled on the record form for all subtests.
• Many test items are scored in a dichotomous manner.
• Both age and grade norms are provided.
• In an effort to improve ease of administration and shorten administration times, the same reverse rule and discontinue rule is used across all applicable WIAT-III subtests, and the discontinue rule has been shortened to 4 consecutive scores of 0.
• New descriptive categories reflect a simpler system of categorizing achievement level based on the test's SD of 15.
• Scoring rules were improved in response to scoring studies, theoretical reviews by expert researchers, and usability reviews by teachers and clinicians.
• There are sufficient scoring examples and a 118-page Scoring Workbook to assist examiners in learning and using the scoring rules.
• Q-global and the Scoring Assistant computer program are valuable tools for score calculation, graphs, and subtest comparisons, saving time and eliminating possible human-made clerical errors.
• Error analysis for the subtests that have large numbers of skill categories is provided, both in the computer Scoring Assistant CD and as reproducible worksheets in the examiner's manual.

Weaknesses
• The test record form's layout is very busy and crowded in places, particularly on the first two Score Summary pages.
• The record form itself is 50 pages long. For those using the WIAT-III with younger children (pre-K and K), much of the record form is wasted (a separate Pre-K/K record form is available for purchase).
• When hand scoring and filling in the summary pages, great care must be taken to avoid error. Normative tables are provided as a PDF file, which makes looking up score conversions awkward and at times difficult to read, especially on a small laptop screen (some tables are printed in landscape mode, not portrait layout).
• Not all correct or acceptable answers are provided on the record form. Examiners must be diligent to check the Examiner's Manual (Appendix B) and the Scoring Workbook for guidance.
• Several questions appear to have more correct answers than are listed in the record form.
• The new descriptive categories are different from those used in earlier versions of the WIAT, as well as in many other achievement and ability measures, which may be confusing to those familiar with the old categories. The undifferentiated "Average" range, extending from 85 to 115, may make sense statistically and psychologically, but we find it much too broad (more than 2/3 of the population) for interpreting educational performance.

(continued)