It’s January. Can Johnny Read?

While we spent the fall waiting for PARCC to set performance standards, NAGB to release NAEP scores, and Congress to reauthorize ESEA, students and teachers across the country were going to school day after day, week after week, month after month.  Now it’s the middle of January and half of the school year is complete.  Can Johnny read?  Can Jane do math?  Can their teacher answer those questions?

I hope that the answer to each of those questions is yes.  I also hope that high-quality assessment aligned to instruction has both informed the teacher’s instruction throughout the year and made it possible for her to know how Jane, Johnny and the rest of her students are performing in the middle of January.  Ideally, she knows which standards they have mastered, the areas in which their performance is strong, and the areas in which they need additional support.  She knows how well-prepared her students are for the lessons that begin next week and whether they are on pace to achieve proficiency on grade-level standards by the end of the year.  One thing of which I am certain, however, is that none of the information she needs in the middle of January came from a state assessment administered at the end of the school year.

For all of the rhetoric about the need for high-quality state assessments that provide timely data to inform instruction, state assessments are not going to provide the information that is needed to inform instruction on a daily basis.

First, there is the obvious timing issue.  The 2015 state assessment was administered eight months ago.  The 2016 state assessment is still a couple of months away.  It should go without saying, of course, that neither of those assessments is going to provide information to help the teacher answer questions about how well Johnny and Jane are performing in the middle of January 2016.

A little less obvious is the fact that state assessments, by their very nature, are not designed to provide detailed information about an individual student’s performance.  Well-designed state assessments can provide accurate and fairly precise estimates of a student’s overall proficiency.  That is their primary purpose with respect to student results, and they can do that well.  State assessments cannot, however, yield accurate and precise information about which standards a student has mastered or identify the particular knowledge and skills on which a student needs additional support.  An on-demand assessment designed to assess the entire set of content standards in a relatively short period of time cannot provide that level of information about an individual student’s performance.

Therefore, let one of our New Year’s resolutions for 2016 be to stop claiming (or allowing others to claim) that state assessments can provide information that they cannot possibly provide.  Equally important, we must resolve in 2016 to not let people evaluate state assessments against criteria which those assessments cannot possibly meet and should not be expected to meet.  If we can remove that distraction from the education reform conversation, then perhaps 2016 can be the year when the field is forced to come to grips with what is needed to ensure that teachers in rural Maine, downtown San Antonio, and suburban Seattle know how well Johnny and Jane can read and do math in the middle of January, at the end of March, or at any other point during the school year.

Changing the conversation will not be easy

Even when the conversation moves beyond improving the state assessment, it still tends to get bogged down in distinctions among summative, interim, and formative assessment – distinctions which are largely artificial and irrelevant to the big-picture issues of teachers’ ability to evaluate student performance and to implement effective instruction to improve it. It is much easier to discuss the strengths and weaknesses of assessments than it is to discuss the underlying changes that may be needed to improve student achievement; and yes, as is often claimed, some of those changes may involve factors that are beyond the control of teachers and schools.

As a starting point, it is necessary to untangle three related, but separate, aspects of criticisms of information provided by assessments:

  • Do teachers have enough information to evaluate students’ current performance?
  • If so, do teachers know the next steps to take to improve student performance?
  • If so, do the available resources and current conditions support teachers’ and students’ efforts to improve student performance?

The answer to the first two questions may identify a need to improve assessments, but almost certainly will highlight the need to improve teachers’ assessment literacy, defined as the ability to know which information to gather, to effectively and efficiently gather that information, to evaluate the information gathered, and to make sound instructional decisions based on that information.  Building better assessments and assessment reports may be a necessary part of the solution, but alone it is not a sufficient approach to improving teachers’ assessment literacy.  That requires a commitment to change teachers’ pre-service preparation, in-service professional development, and professional interactions (e.g., common planning time, professional learning communities, instructional coaching).  The third question strikes at the core of much of the debate about assessment- and accountability-driven reform efforts, but it is lost (or hidden) in arguments about the quality and use of assessments.

Yes, assessments can be improved.  Yes, teachers need to develop a basic understanding of fundamental assessment and measurement concepts.  However, those issues are just the tip of the iceberg when it comes to enhancing the likelihood that teachers can answer the question, ‘Can Johnny read?’ with greater accuracy and precision than last year’s state assessment.

Refocusing on the appropriate role for state assessments

A pleasant side effect of an increased awareness of what state assessments are not designed to do is that it will allow for a realistic discussion of what state assessments can do and what their role should be in education reform and in school, teacher, and student accountability.

We have established that I should not look at last year’s state assessment results to answer questions about Johnny’s current reading performance any more than I should look at measurements from last spring’s annual pediatrician’s visit to determine his current height and weight.  That does not mean, however, that the state assessment (like the doctor’s visit) does not provide valuable information about Johnny.  As President Bush and Secretary Duncan argued for years, state assessment results provide parents and students with an independent, external indicator of a student’s performance.  One can argue whether such an indicator is needed every year in reading and mathematics, but it is certainly useful information to have at certain intervals throughout a child’s K-12 education.

External comparisons are also useful at the school and district level.  Although a school or district should be able to evaluate the effectiveness of its curriculum or a new instructional program based on internal indicators, the information provided by an external state assessment helps it to measure performance and calibrate expectations in relation to the state or other districts.  Such comparisons are particularly important at a time such as this, when districts are still in the midst of implementing new state content standards and states are establishing new achievement expectations.

In the long run, a better understanding of the strengths and limitations of state assessments should also lead to a restoration of the balance in the social contract between states and schools (and parents) regarding the length, costs, and uses of state assessments.  Realistic expectations about the role of state assessments should result in a successful resolution to calls for shorter and less expensive state assessments based on an understanding of the limitations in the information that those assessments can provide.

Can Johnny read?  Look to the state assessment for a good estimate of Johnny’s reading proficiency at the end of the year.  Look elsewhere for information to inform instruction throughout the year.

Published by Charlie DePascale

Charlie DePascale is an educational consultant specializing in the area of large-scale educational assessment. When absolutely necessary, he is a psychometrician. The ideas expressed in these posts are his (at least at the time they were written), and are not intended to reflect the views of any organizations with which he is affiliated personally or professionally.
