The latest installment in the fine series of reports produced by researchers at NWEA analyzing interim assessment results since the onset of COVID suggests that the recovery of learning lost during the response to the pandemic is not going as well as we had hoped. News reports featuring words and phrases like “stalled,” “sputtering,” and “losing academic ground” are not what we hoped to see as we begin the fifth school year since March 2020.
The conclusions are based on comparisons of growth (or progress or gains) made by students during the 2023-24 school year with that made by corresponding cohorts of students in the school years prior to the pandemic.
Estimates are that the number of additional months of instruction or “learning” needed for recovery in mathematics is holding steady while the amount of additional time needed in reading is actually increasing. More on those metrics in an upcoming post, but the message is clear. Kids today are not performing the way corresponding cohorts performed consistently just a few years ago.
I fully expect that the results and reports will be similar as more states release their 2023-24 state assessment results over the next few months and NAEP 2024 results are finally released “in late December or early January”; that is, at some reasonable amount of time after all of the popular and electoral college votes have been accounted for, counted, recounted, and certified.
Reasons Why We Should Be Concerned
There are any number of plausible explanations for why the stalled or sputtering recovery is real and should be a concern, some of which are student-related. Chronic absenteeism is one potential cause that pops to mind. Issues related to a decline in students’ mental health and well-being are also a cause for concern, whether those issues are a result of the pandemic and its aftermath, social media, heightened concerns about climate change and other global issues, or a divided country.
Still focused on students, but at a slightly different level, there is another class of plausible explanations. It’s possible that the difference in achievement can be explained by changes to the pool of students: significant population changes over the past five years that just happened to coincide with the pandemic. Declines that would have looked gradual over the course of five years may appear abrupt because of the gap in testing and/or the effects of the pandemic. Or it could be that the pool of students is the same, but those students’ living conditions, socioeconomic status, or standard of living has changed since the pandemic in ways that are detrimental to learning and achievement.
Or perhaps the “decline” in performance has more to do with the teachers than the students. We have all seen the reports of teacher shortages and teacher burnout, preexisting conditions that were exacerbated by the pandemic and all of the other reasons cited above.
Of course, there is also the curriculum to consider.
I find it interesting that the latest analyses show kids need more time to catch up in reading than in mathematics. Could that finding be related to the recent hubbub over how to teach reading and the realization that many of us have been doing it wrong for the past few decades? Or perhaps it’s the hullabaloo over what kids should and shouldn’t be reading in schools. It cannot be easy when curricular choices are being attacked from the left and the right (for different reasons, of course). Jane, Dick, and Spot have never been so sad and confused.
Any, all, or some combination of the above reasons might be contributing in some way to the decline in achievement, or growth, or learning. All are cause for concern to some extent.
On the Other Hand, Maybe It’s Not As Bad As It Seems
In addition to the reasons cited above that raise concern, there are at least a couple of plausible reasons why the weaker performance picked up by interim and other tests may still be very real but may not be as much of a concern.
The first is simply prioritization. In most respects, education is a zero-sum game, the notable exception being finance, of course, which operates as a zero-sum game at the local level, and even state level, but is anything but zero-sum when the federal government decides to print more money earmarked for PK-12 education as it did the past few years. Truth be told, these intermittent and temporary infusions of cash with conditions probably do more harm than good in the long run, but that’s a topic for another day.
In virtually all other respects, however, there is only so much time, space, energy, and human capital to go around. Education requires choices and tradeoffs. Since NCLB, some ill-conceived choices involved trading recess, physical education, the arts, and the humanities for more time devoted to instruction in reading and mathematics. At this point in time, schools may be reprioritizing. Perceived progress in reading and mathematics may suffer a bit in favor of gains in other areas deemed higher priorities at the moment.
Or it may be that schools have changed what they teach and how they teach reading and mathematics in ways that we’re not capturing with our tests. As the pandemic struck, there was some evidence that schools were finally starting to get the hang of the Common Core State Standards (or whatever they happened to be called in their state) and perhaps even paying more attention to those 21st century skills that have been promoted for nearly a half century. Our on-demand, end-of-year tests are not designed to pick up on those skills.
OK, perhaps I went a little too far there with regard to the CCSS and 21st century skills.
It may still be true, however, that the way that schools teach reading and mathematics, what they choose to include in the curriculum, and how they ask students to demonstrate their proficiency (or mastery or competency) has changed.
Mathematics has been constantly changing at least since I was in high school and calculators began to replace slide rules. I could devote an entire blog post to skills and techniques that we were taught in mathematics that have been made obsolete by advances in technology. Kids today don’t know what a slide rule is for – what a wonderful world.
Our interim and state tests don’t ask kids to use slide rules, but neither do they directly measure the “construct” or skill that we are interested in measuring. Three forces have combined to produce what we might refer to as tests more focused on the parts than the whole: the intense focus on alignment to individual standards, the expansion of assessed standards to cover the breadth or range of what happens between the end of two grade levels, and the current trend of breaking the “whole assessment” down into small units administered throughout the year. That whole is invariably greater than and different from the sum of its parts, whether those parts are individual test items or individual testlets.
The simple fact is that with scores on interim or end-of-year tests we are almost always making inferences about the whole based on our understanding of the relationship of parts to that whole. To the extent that relationship changes, we may be missing something important about student performance.
Daunting.
Reading, however, poses even greater challenges.
Although approach and processes have changed in mathematics, the outcome or end result generally is still the same. Find the value of x. Determine the minimum point on the graph. Solve the word problem. Choose the best path, better option, …
Not so much in reading.
We thought that the literary to informational shift in the CCSS was radical, but that was a blip on the radar compared to what has occurred since 2010. What we call “reading” or “English language arts” is ultimately about communication, and the ways that all of us, including, and perhaps particularly, students, receive and transmit information – that is, communicate – are vastly different now than they were 5, 10, or 15 years ago.
In our effort to equate tests across years, maintain trends, and look back to life and achievement before the pandemic, we may be missing real and significant changes in the “construct” that have occurred since the spring of 2019 or even 2015. Those changes might partially explain the drop in reading test scores that started before the pandemic as well as explain why reading, which emerged from the initial onslaught of the pandemic in better shape than mathematics, appears to be struggling today.
Just a thought.
Should We Be Worried?
Of course we should be worried. It’s who we are and what we do.
Sollicitus sum, ergo sum. (I worry, therefore I am.)
How weird it would be to run around joyful all the time. Right?
Anyway, the takeaway today, as it always has been, is that we need more information than a test score can possibly convey to understand how much we should be worried and what we should be worried about.
The latest round of interim test scores, along with the state test scores, and ultimately the NAEP 2024 scores will tell us what student achievement looks like at a particular point in time. There will even be analyses conducted that attempt to better describe or explain those results, although we’ve never been really good at differentiating between descriptions and explanations or at deciding what to do with them.
What those test scores alone won’t and can’t do, however, is to tell us the extent to which any or all of the reasons cited above have contributed to student performance; and my best guess is that all have contributed.
We need to dig deeper than a test and a test score to answer that question.
When we make that effort, we will better understand our current situation and will put policymakers in a better position to determine how much effort we should be devoting to recovery and how much we should be devoting to transformation.
It might even give us a step up on AI in determining what the future of assessment and testing should look like.
What a wonderful world that would be.
Image by Gerd Altmann from Pixabay