
- Contents
- Series Preface
- Acknowledgments
- Purposes and Uses of Achievement Tests
- Diagnosing Achievement
- Identifying Processes
- Analyzing Errors
- Making Placement Decisions and Planning Programs
- Measuring Academic Progress
- Evaluating Interventions or Programs
- Conducting Research
- Screening
- Selecting an Achievement Test
- Administering Standardized Achievement Tests
- Testing Environment
- Establishing Rapport
- History and Development
- Changes From KTEA-II to KTEA-3
- Subtests
- Mapping KTEA-3 to Common Core State Standards
- Standardization and Psychometric Properties of the KTEA-3
- Standardization
- Reliability
- Validity
- Overview of the KTEA-3 Brief Form
- Brief Form Standardization and Technical Characteristics
- How to Administer the KTEA-3
- Starting and Discontinuing Subtests
- Sample, Teaching, and Practice Items
- Recording Responses
- Timing
- Queries and Prompts
- Subtest-by-Subtest Notes on Administration
- How to Score the KTEA-3
- Types of Scores
- Subtest-by-Subtest Scoring Keys
- How to Interpret the KTEA-3
- Introduction to Interpretation
- Step 1: Interpret the Academic Skills Battery (ASB) Composite
- Step 2: Interpret Other Composite Scores and Subtest Scores
- Subtest Floors and Ceilings
- Interpretation of Composites
- Clinical Analysis of Errors
- Qualitative Observations
- Using the KTEA-3 Across Multiple Administrations
- Repeated Administrations of the Same Form
- Administering Alternate Forms
- Using the KTEA-3 Brief Form
- Progress Monitoring
- Screening for a Comprehensive Evaluation
- KTEA-3 Score Reports
- History and Development
- Changes From WIAT-II to WIAT-III
- Age Range
- New and Modified Subtests
- Composites
- Administration and Scoring Rules
- Skills Analysis
- Intervention Goal Statements
- New Analyses
- New Scores
- Validity Studies
- Materials
- Scoring and Reporting
- Description of the WIAT-III
- Subtests With Component Scores
- Mapping WIAT-III to Common Core State Standards
- Standardization and Psychometric Properties of the WIAT-III
- Standardization
- Reliability
- Validity
- Starting and Discontinuing Subtests
- Sample, Teaching, and Practice Items
- Recording Responses
- Timing
- Queries and Prompts
- Subtest-by-Subtest Notes on Administration
- How to Score the WIAT-III
- Types of Scores
- Score Reports
- Subtest-by-Subtest Scoring Keys
- Listening Comprehension
- Early Reading Skills
- Reading Comprehension
- Sentence Composition
- Word Reading and Pseudoword Decoding
- Essay Composition
- Numerical Operations
- Oral Expression
- Oral Reading Fluency
- Spelling
- Math Fluency—Addition, Subtraction, and Multiplication
- Introduction to Interpretation
- Step 1: Interpret the Composite Scores
- Subtest Floors and Ceilings
- Skills Analysis
- Intervention Goal Statements
- Qualitative Data
- Using the WIAT-III Across Multiple Administrations
- Linking Studies
- Overview of the WISC-V, WISC-V Integrated, and KABC-II
- Qualitative/Behavioral Analyses of Assessment Results
- Identification of Specific Learning Disabilities
- Interpretation and Use of Three New Composite Scores
- Accommodations for Visual, Hearing, and Motor Impairments
- Ongoing Research on Gender Differences in Writing and the Utility of Error Analysis
- Female Advantage in Writing on KTEA-II Brief and Comprehensive Forms
- Strengths and Weaknesses of the KTEA-3
- Assets of the KTEA-3
- Test Development
- Two Forms
- Standardization
- Reliability and Validity
- Administration and Scoring
- Interpretation
- Phonological Processing
- KTEA-3 Flash Drive
- Limitations of the KTEA-3
- Test Development
- Standardization
- Reliability and Validity
- Administration and Scoring
- Test Items
- Interpretation
- Final Comment
- Strengths and Weaknesses of the WIAT-III
- Assets of the WIAT-III
- Test Development
- Normative Sample
- Reliability and Validity
- Administration and Scoring
- Interpretation
- Better Listening Comprehension Measure
- Technical Manual
- Limitations of the WIAT-III
- Floor and Ceiling
- Test Coverage
- Poor Instructions for Scoring Certain Tasks
- Item Scoring
- Audio Recorder
- Final Comment
- Content Coverage of the KTEA-3 and WIAT-III
- Case Report 1: Jenna
- Reason for Evaluation
- Background Information
- Behavioral Observations
- Assessment Procedures and Tests Administered
- Test Results
- Neuropsychological Implications and Diagnostic Impressions
- Recommendations
- Psychometric Summary for Jenna
- Case Report 2: Oscar
- Reason for Evaluation
- Background Information
- Behavioral Observations
- Assessment Procedures and Tests Administered
- Test Results
- Diagnostic Summary
- Recommendations
- Resources
- Psychometric Summary for Oscar
- Case Report 3: Rob
- Purpose of the Evaluation
- History and Background
- Behavioral Observations
- Assessment Procedures and Tests Administered
- Results
- Summary and Diagnostic Impressions
- Recommendations
- Psychometric Summary for Rob
- Q-interactive Versus Q-global
- Equivalency Studies
- Essential Features of Q-interactive
- Key Terminology
- Central Website
- Assess Application
- References
- Annotated Bibliography
- About the Authors
- About the Digital Resources
- Index

Language Fundamentals–Fourth Edition (Wiig, Semel, & Secord, 2003) Formulated Sentences subtest, which had correlations of .64, .53, and .47 with the KTEA-3 Oral Expression, Written Expression, and Listening Comprehension subtests, respectively. The correlation between the KTEA-3 Oral Language composite and the WIAT-III Oral Language composite was .73, with correlations of .48 between the two batteries' Listening Comprehension subtests and .56 between their Oral Expression subtests.
Similarly, the reliabilities are low and the Standard Errors of Measurement high for the age-norm sample for Oral Language (SEm 4.85 to 6.34) and Oral Fluency (SEm 6.99 to 8.81). Therefore, differences needed for statistical significance between these tests, and between these and other tests, are large.
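To put those SEms in concrete terms, the difference two scores must reach before it can be trusted statistically can be approximated with the usual standard-error-of-the-difference formula. The worked figures below use the upper ends of the SEm ranges quoted above; they illustrate the general principle and are not the values tabled in Appendix G of the Technical & Interpretive Manual.

```latex
% Standard error of the difference between two scores with known SEms:
SE_{\mathrm{diff}} = \sqrt{SEm_a^{2} + SEm_b^{2}}
% Using the upper ends of the ranges quoted above (6.34 and 8.81):
SE_{\mathrm{diff}} = \sqrt{6.34^{2} + 8.81^{2}} \approx 10.9
% Difference needed for significance at the .05 level (z = 1.96):
1.96 \times 10.9 \approx 21 \text{ points}
```

On a mean-100, SD-15 metric, a required difference on the order of 20 standard-score points is large indeed, which is the practical concern being raised here.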
Administration and Scoring
Error analysis for Letter & Word Recognition and Nonsense Word Decoding requires phonetic transcription of the examinee's pronunciations. Unless the examinee reads very rapidly, this is not difficult for examiners with some background in reading instruction or speech pathology, but it can be challenging for those lacking that background. One reviewer (RD), to whom this description applies, has had great difficulty accurately recording the phonemic responses of several students. Thankfully, the Administration Manual (Kaufman & Kaufman with Breaux, 2014c, p. 28) notes, "Recording oral responses verbatim may be recommended or required on some subtests, depending on the purpose of the evaluation and the depth of information required. Use of an audio recorder during these subtests is highly recommended to ensure accuracy of recording and scoring." We hope that examiners will record the oral responses, even though audio recording adds to the equipment needed for test administration. A pocket-size audio recorder, along with a machine to play back the audio files, is not much to add to the kit.
Weighted raw scores for Reading Comprehension, Written Expression, Listening Comprehension, Word Recognition Fluency, and Oral Expression are taken from conversion tables in Appendix C of the Technical & Interpretive Manual, based on the item set(s) administered and the number of items passed within the set(s). Appendix C also contains Raw Score Lookup tables for Writing Fluency, Object Naming Fluency, and Letter Naming Fluency. The Subtest & Composite Score Computation Form on the KTEA-3 flash drive should make it easy to enter the correct raw scores and weighted raw scores, but examiners new to the concept of item sets sometimes encounter difficulties.
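For readers new to item sets, the sketch below shows the mechanics of the lookup in a deliberately simplified form. The item-set labels, the weighted-score values, and the function name are invented placeholders; the real conversions must always be read from Appendix C of the Technical & Interpretive Manual.

```python
# Hypothetical illustration of the item-set lookup logic used for subtests such as
# Reading Comprehension. The item-set labels and weighted raw score values below are
# invented placeholders; real conversions come from Appendix C of the KTEA-3
# Technical & Interpretive Manual.

# (item_set_label, number_of_items_passed) -> weighted raw score
WEIGHTED_RAW_LOOKUP = {
    ("Set C", 0): 18,
    ("Set C", 1): 21,
    ("Set C", 2): 24,
    # ... one entry per item set and raw score, as tabled in Appendix C
}

def weighted_raw_score(item_set: str, items_passed: int) -> int:
    """Return the weighted raw score for the item set administered.

    Raises KeyError if the combination is not in the table, which in practice
    signals a recording or data-entry error.
    """
    return WEIGHTED_RAW_LOOKUP[(item_set, items_passed)]

if __name__ == "__main__":
    # Example: an examinee administered the hypothetical Set C who passed 2 items
    print(weighted_raw_score("Set C", 2))  # -> 24 (placeholder value)
```

The point of the illustration is simply that the weighted raw score depends jointly on which item set was given and how many items were passed within it, which is why entering the wrong item set is the typical novice error.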
We do not find significant difficulties with scoring most of the Written Expression and Oral Expression items, but some of our graduate students, workshop attendees, and colleagues report uncertainty in scoring some items, even with the explanations, examples, and Language Glossary in the Scoring Manual (Kaufman & Kaufman with Breaux, 2014d).

Test Items
Some potentially ambiguous or confusing items on the KTEA-II were improved in the KTEA-3. However, several of the KTEA-3 Reading Comprehension and Listening Comprehension items list among their "incorrect" responses answers that appear to us to be merely incomplete. Some of these responses are flagged for a query (Say, Tell me more), but several are not, and querying is permitted only for the responses so flagged on the easel. Laconic examinees with adequate reading comprehension might be penalized on those items.
Interpretation
As noted above under Test Development and in Tables 5.2 and 5.3, examiners must be very cautious when interpreting scores on subtests whose ceilings limit the highest possible scores for older examinees and whose floors limit the lowest possible scores for younger ones. Near-perfect raw scores that produce only modestly high standard scores, and near-zero raw scores that still yield moderately high standard scores, must be interpreted with extreme care and may require assessment with another instrument. When an examinee's skills are extremely weak, criterion-referenced assessment may be more useful than norm-based assessment.
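As a rough screen, floor and ceiling adequacy is often judged by whether the lowest and highest attainable standard scores fall at least two standard deviations from the mean (70 and 130 on a mean-100, SD-15 metric). The sketch below applies that rule of thumb; the thresholds are a common convention rather than a KTEA-3 requirement, and the example values are invented, not taken from Tables 5.2 and 5.3.

```python
# Rough floor/ceiling screen for a subtest at a given age or grade.
# The 2-SD rule of thumb (floor <= 70, ceiling >= 130 on a mean-100, SD-15 scale)
# is a common convention, not a KTEA-3 requirement; the example values are invented.

MEAN, SD = 100, 15

def floor_ceiling_flags(lowest_attainable: int, highest_attainable: int,
                        n_sd: float = 2.0) -> dict:
    """Flag inadequate floors/ceilings given a subtest's attainable standard scores."""
    return {
        "inadequate_floor": lowest_attainable > MEAN - n_sd * SD,    # cannot go low enough
        "inadequate_ceiling": highest_attainable < MEAN + n_sd * SD, # cannot go high enough
    }

# Example: a subtest where a raw score of 0 still yields 78 and a perfect
# raw score yields only 124 (hypothetical numbers)
print(floor_ceiling_flags(lowest_attainable=78, highest_attainable=124))
# -> {'inadequate_floor': True, 'inadequate_ceiling': True}
```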
If the six Academic Skills Battery (ASB) subtests for older students (five for kindergarten, three for prekindergarten) are not all administered, then subtests and other composites cannot be compared to the ASB. However, subtests and composites can still be statistically compared to one another.
Appendix G in the Technical & Interpretive Manual lists differences needed for statistical significance and base rates for differences between composites (Tables G.3 and G.7 for grade and age, respectively) and between subtests (Tables G.4 and G.8), which is essential information. However, in Tables G.4 and G.8, it would have been helpful to highlight the two subtests that make up each composite. Most examiners want to know whether the components of a total score are sufficiently consistent with one another to permit confident interpretation of that total score. This is another instance in which examiners may want to print out and store copies of tables, which they could highlight manually. We find it helpful to use the blank space in the "Composite" columns on the Subtest & Composite Score Computation Form (which must be printed from the KTEA-3 flash drive for each examinee) to note whether the difference or differences between the two or three subtests in each composite are significant and unusual (base rate). We write either "p < .05" or "n.s." and either "f ≤ 10%" or "f > 10%" in the blank space for each composite.
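The annotation we write by hand is easy to automate once the table values are in front of you. The sketch below is a minimal illustration of that bookkeeping, assuming the critical-difference and 10% base-rate values have already been read from Tables G.4 and G.8; the numbers in the example are placeholders, not actual table entries.

```python
# Automates the "p < .05"/"n.s." and base-rate note described above.
# The critical-difference and base-rate values must be read from Appendix G
# (Tables G.4 and G.8) of the Technical & Interpretive Manual; the values in
# the example below are placeholders.

def composite_consistency_note(score_a: int, score_b: int,
                               critical_diff_05: float,
                               diff_at_10pct_base_rate: float) -> str:
    """Return a significance and base-rate note for the two subtests in a composite."""
    diff = abs(score_a - score_b)
    sig = "p < .05" if diff >= critical_diff_05 else "n.s."
    rarity = "f <= 10%" if diff >= diff_at_10pct_base_rate else "f > 10%"
    return f"{sig}, {rarity}"

# Example with invented table values: subtest standard scores of 92 and 110,
# a critical difference of 12 points, and a 10% base-rate difference of 19 points
print(composite_consistency_note(92, 110, critical_diff_05=12,
                                 diff_at_10pct_base_rate=19))
# -> "p < .05, f > 10%"  (significant, but not unusually large)
```

As the example output shows, a difference can be statistically significant without being rare in the standardization sample, which is exactly why we record both pieces of information on the form.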
As noted above, the comparison between the Reading Comprehension and Listening Comprehension subtests is extremely valuable diagnostically because the formats of the two subtests are so similar and the correlation between them is fairly high (e.g., .61 in grade 5), so a difference of 22 points has a base rate ≤10% in grades 1 through 6. The comparison between Written Expression and Oral Expression is somewhat less helpful because of substantial differences in test format and a lower