Is that all there is?

One of the benefits of giving up a late Sunday afternoon to travel to the site of my Monday morning meeting is the opportunity to leisurely read the newest edition of the Late Late Bell from the Fordham Institute.  Last Sunday, as the Amtrak Northeast Regional rumbled toward Providence, I read the following in the middle of a Kevin Mahnken post on the Clintons and testing:

And moving from annual testing back to the grade-span assessments of old … would be a huge mistake; conducting assessments every year is the only way to regularly measure student growth, and it prevents administrators from stashing their worst teachers in grades they know won’t be tested.

As I read those lines, the song ‘Is that all there is?’ (a hit by Peggy Lee in the late 1960s) immediately popped into my head.  Is our best argument for annual testing that it is the only way to measure student growth and that it prevents administrators from hiding bad teachers?  Is that why I am riding to Providence on a beautiful spring Sunday?  Is that why I have worked in this field for the last 27 years?

Is that all there is, is that all there is
If that’s all there is my friends, then let’s keep dancing
Let’s break out the booze and have a ball
If that’s all there is

Monitoring individual students’ progress longitudinally throughout their K-12 (or PK-16) experience is a good thing.  But annual administration of state assessments is not the only way to do that.  Is the information provided by growth scores enough to justify annual testing?  Additionally, do we really need annual testing to regularly “measure growth” (quotation marks added to indicate sarcasm)?  How much less would we know about an individual student’s or a school’s growth if we tested every other year?  For example, imagine testing students in grades 3, 5, 7, 9, and 11 in English language arts and in grades 4, 6, 8, and 10 in mathematics.  Would it be impossible to compute a growth score?  Would support programs and interventions for individual students change?  Would policymakers have to think more carefully about how to attribute student performance to individual teachers – and would that be a bad thing?  Would schools stop teaching English or mathematics in the grade levels without an assessment?  I hope that parents, school boards, and even the state would be able to detect that fairly quickly.
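For readers who want to see the arithmetic, here is a minimal, purely hypothetical sketch of how a growth score could still be computed when the two testing occasions are two grades apart rather than one year apart.  The students, scale scores, and the simple change-in-relative-standing metric below are all invented for illustration; real growth models are far more elaborate, but nothing in them requires that the interval between tests be exactly one year.

```python
# Hypothetical illustration: a simple growth calculation when students are
# tested two grades apart (e.g., grade 3 and grade 5 ELA) instead of annually.
# Student IDs and scale scores are invented for the example.

from statistics import mean, pstdev

# (student_id, grade_3_scale_score, grade_5_scale_score) -- invented data
records = [
    ("A", 420, 455),
    ("B", 410, 452),
    ("C", 445, 470),
    ("D", 400, 430),
]

g3_scores = [r[1] for r in records]
g5_scores = [r[2] for r in records]

def z(score, scores):
    """Standardize a score against its testing occasion."""
    return (score - mean(scores)) / pstdev(scores)

# Define growth as the change in a student's relative standing between the
# two tested grades -- a crude gain-in-z-scores metric standing in for a
# real growth model.  The two-year span changes nothing about the arithmetic.
for sid, s3, s5 in records:
    growth = z(s5, g5_scores) - z(s3, g3_scores)
    print(f"Student {sid}: growth across the two-grade span = {growth:+.2f}")
```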

Which brings us to the second reason offered for the necessity of annual testing – “that it prevents administrators from stashing their worst teachers in grades they know won’t be tested.”  Yes, we all have a friend whose spouse has a cousin who teaches in a school where that happens.  So, it must be true, right?  Let’s accept for a moment that this practice really was occurring on a widespread basis with grade-span testing and play out the scenario.  That still leaves us with two possible outcomes:

Outcome A: Students tested at the end of the grade span meet expectations.

If students can meet expectations at the critical benchmark grade, then a) it doesn’t really matter that the school is hiding less effective teachers at other grade levels, b) performance expectations at the benchmark grades are too low, or c) the accountability system allows schools to meet expectations while groups of students with less effective teachers are performing poorly.

Outcome B:  Students tested at the end of the grade span do not meet expectations.

If students do not meet expectations at the end of the grade span, then hiding teachers in the non-tested grades was not an effective strategy, and the administrator will have to try a different approach.

Over the long term, with well-designed assessment and accountability programs, I expect that Outcome B will be more likely than Outcome A, and I hope that policies such as annual testing of all students are based on a long-term theory of action and not on a quick-fix solution to thwart bad administrators.

To be fair, the Mahnken post does link to a longer list of the benefits of testing kids every year offered by Andy Smarick:

  • It makes clear that every student matters.
  • It makes clear that the standards associated with every tested grade and subject matter.
  • It forces us to continuously track all students, preventing our claiming surprise when scores are below expectations.
  • It gives us the information needed to tailor interventions to the grades, subjects, and students in need.
  • It gives families the information needed to make the case for necessary changes.
  • It enables us to calculate student achievement growth, so schools and educators get credit for progress.
  • It forces us to acknowledge that achievement gaps exist, persist, and grow over time.
  • It prevents schools and districts from “hiding” less effective educators and programs in untested grades.

I could devote an entire post to each of those purported benefits of annual testing, but for now I’ll make just a few comments on the major themes.  With regard to forcing us to acknowledge that achievement gaps exist, persist, and grow over time, how many times do we have to test all kids at every grade level to make that point clear?  With regard to arming families with information, one could make the case that school and subgroup results are much more valuable weapons to bring into an argument for making necessary changes.  It is much too easy for teachers and administrators to dismiss or explain away the performance of an individual student.  Finally, if we have to make clear to professional educators or policymakers that every student and every grade level matter, then the problem we have is not one that can be solved (or even lessened) by annual testing.

Again, if that’s all there is, then let’s keep dancing

My goal with this post is not to call for an end to annual testing.  I am not a strong proponent of annual testing, but I can live with it as long as it doesn’t run amok.  Of course, one man’s definition of running amok is another person’s criterion for a high-quality assessment.  After more than a decade of annual testing, however, there should be enough hard data to evaluate its pros and cons.  There could be quantitative and qualitative evaluations.  I would even be willing to accept some econometric analyses of the impact of annual testing.  I will go out on a limb and say that my educated guess is that the findings of such evaluations would not be black-and-white.  They would probably find that annual testing is more beneficial for some students, and in some circumstances, than it is for others.  Those evaluations would probably also find that there is a curvilinear relationship between the length of a test and its effectiveness as either a measurement instrument or an accountability tool.  In short, those analyses are likely to say that this is a complex question, and that there needs to be serious, thoughtful discussion about the purposes of state testing and about how much and what type of state testing, if any, is needed on an annual basis.

On my next Sunday afternoon train ride, I will read through ESSA again and look for the place where those evaluations are described and funded (and no, it’s not the section on assessment audits).  For now, I’ll just keep dancing.

Published by Charlie DePascale

Charlie DePascale is an educational consultant specializing in the area of large-scale educational assessment. When absolutely necessary, he is a psychometrician. The ideas expressed in these posts are his (at least at the time they were written), and are not intended to reflect the views of any organizations with which he is affiliated personally or professionally.
