Pushing Through on Through-Year

Through-year assessment.

Is it an idea whose time has come or a bad idea that just won’t go away?

Is through-year assessment the best thing since sliced bread?

Or is through-year assessment simply the backup quarterback: the next best thing, whose chief appeal is that it isn't what we're doing now?

As any football fan knows, unless your team has someone like Patrick Mahomes or Joe Burrow as quarterback, the backup quarterback is often the most popular player on the team. The one player who offers hope that things could be better. The one player who is different from what you are doing now. The one player who hasn’t let you down – yet. The player with the clean uniform who, until he steps on the field, is 100% promise and potential.

Ninety-nine times out of one hundred, there is a good reason the backup is on the bench.

But sometimes the backup quarterback is Tom Brady.

A Cautionary Tale

My former colleagues at the Center for Assessment recently published a brief outlining ten key considerations for states thinking about adopting or developing a through-year assessment program. The Center has promoted the document as a “state-of-the-field paper on through-year assessment,” but their effort might also be called a cautionary tale.

The Center points out that little research has been conducted to show that through-year assessment can fulfill its promise or its potential. They warn that although the grass may look greener on the through-year side of the fence, when you start layering through-year assessment with all of the logistical constraints and technical requirements associated with state testing for accountability, you may find the same weeds, moss, and crabgrass that you associate with current state testing. And it will cost much more to control those nuisances than it does now, because with through-year assessment you need that lawn to be lush and green year-round.

As a good steward of state testing should, the Center has spent the better part of the last quarter century warning states of the pitfalls and pratfalls associated with changes and proposed “innovations” to high-stakes, large-scale assessment programs and accountability systems. Through the unceasing onslaught of NCLB, the Obama Waivers, interim assessments, Race to the Top, teacher evaluation, the Common Core consortia, ESSA, IADA, and the restart from the pandemic, Center associates have attempted to maintain their balance while pushing that assessment rock up the hill in an effort to improve and support student learning.


But despite the truth in all that they say about through-year assessment, the Center reports that at least a dozen states, including some that the Center advises, are actively pursuing the use of a through-year test for their state assessment program.

One can only wonder why. Why? Why? Why? Why? Why?

Why Do States and Local Educators Want Through-Year Assessment?

That’s a good question, and one that we must continue to ask until we fully understand the reasons why.

I’ll pass over the cynical answer that the through-year demand is the product of slick marketing and heavy lobbying by the assessment arm of the country’s education-industrial complex.

But I will accept the still cynical, but accurate, answer that states want through-year assessment because local educators want through-year assessment, so we can set the state part of the question aside.

That leaves us asking why local educators want through-year assessment.

If we ask local educators the question one time, we will likely hear that through-year assessment will provide more instructionally useful information.

Ask again and we may hear that through-year assessment will reduce testing time (if, and only if, through-year assessment means replacing the end-of-year state test with the interim assessment that is already being administered).

What we need to do, however, is dig deeper.

Attending a recent webinar on Negotiating For Equitable Futures, I heard Dr. Sarah Federman apply the Five Whys technique, not to discovering the root cause of a problem, but to uncovering the underlying reason, or reasons, people are asking for something (e.g., a raise, a new car, a better phone).


Ask local educators why they want more instructionally useful information. Ask why they want to reduce testing time. Ask why they want to be able to monitor student progress throughout the year.

My best guess is that if we continue to ask why, we will all eventually reach the conclusion that what local educators want most is the sense of agency that was taken from them by the assessment demands and accountability requirements of NCLB.

Through-year assessment is a first step in that direction.

Helping Teachers to Know What Students Know

As assessment specialists, our primary role in the education ecosystem is to help educators fulfill theirs. Artificial Intelligence may someday reach the point of matching teachers’ ability to know what students know, but psychometricians and assessment specialists never will.  We are the decomposers in education’s food chain, processing residue and returning information to complete the instructional cycle.

And that’s OK. If we wanted to play more than a supporting role in student learning, we would have stayed in the classroom.

We fulfill our role when we provide useful information.

I could argue, as some have, that through-year assessment is more useful than end-of-year testing because it provides more information, more frequently, and in a more timely fashion. That may or may not be true.

For me, the more important point is that through-year assessment offers the potential to produce better, more valid information than we will ever again be able to provide through our current end-of-year tests.

Some of my former colleagues like to claim that state tests are better than they’ve ever been. The tests are better aligned to standards. New item types better integrate reading and writing, assess problem solving, and measure higher order cognitive skills. Greater attention to bias and sensitivity issues has improved fairness. Accommodations and alternate assessments have made state tests more inclusive and accessible. Adaptive testing is better. Scoring is better.

All of that is true.

By refusing to break free of the on-demand, end-of-year box, however, we have boxed ourselves in. We are providing educators and policymakers with much less useful information than we could and should be providing.


Our Current State Tests Are Dogs

I do not intend that heading to be as pejorative as it sounds.

Since the early 1990s, we have known that an on-demand, end-of-year test cannot fully or adequately measure what we want to measure on state tests. Such tests can only get us part of the way.

Current state tests can tell us that we are measuring a small animal. It has four legs and a tail. It is soft and seems to be domesticated. We know it’s not a bear and we can be very confident it’s not a mouse. And that level of information might be good enough for some situations. We know that we don’t need to set traps (humane traps, of course).

In other situations, we need more information. We need to know whether the animal that we are measuring is a dog or a cat. To make that determination we may need to listen to it, see whether it likes to roll in the mud, or observe how often it ignores us over an extended period of time. In other words, we need to step outside the box of end-of-year testing and look more closely at the cat (or dog).


For years, I was able to convince myself that policymakers and parents were the primary audience for state test results and that all they needed to know was that a school was not infested by rodents or being threatened by wild animals. We could provide that level of information with an on-demand test and selected-response items.

I no longer believe, however, that on-demand, end-of-year tests alone are able to adequately (i.e., validly, reliably, fairly) measure the knowledge, skills, and behaviors reflected in current state content and achievement standards – even to the level of making proficiency classifications for parents and policymakers.

The on-demand, end-of-year test is no longer a viable option.

Through-year assessment is a step in the right direction.

Fortune Favors the Bold

The transition to through-year assessment will not be monotonic over time or unidimensional in nature. That is, there will be ups and downs as we consider and simultaneously contend with multiple factors along the way.

But it is a transition that we have to make, sooner rather than later. States that are willing to step up and take the lead need to be supported and encouraged, not collared by restrictive programs like IADA.

If through-year assessment simply meant replacing current state tests with current interim assessments, that would not be a bold step, but it would be a step. But the good news is that we are already seeing through-year applications being developed in Louisiana and Georgia that go well beyond the interim assessment framework. We must continue to push the limits of our thinking on what can be accomplished via through-year assessment.


There will be mistakes made along the way.  

As they say, anyone who has never made a mistake has never tried anything new, and the biggest mistake of all is to do nothing at all.

But the presence of mistakes doesn’t mean that there has to be harm.

If you’re worried about identifying the bottom 5% of schools during a transition, let me ease your mind, and save you a lot of time and money.

Take the bottom 10% of schools from the previous year, or the year before that, or the year before that (pandemic notwithstanding). Draw a random sample of half of them. There’s your 5%. Then provide targeted and comprehensive support to the entire 10% because they all need it.
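Taken literally, that procedure amounts to a few lines of code. The sketch below is purely illustrative; the school IDs, the 200-school roster, and the prior-year index are all made-up assumptions:

```python
import random

random.seed(0)  # make the illustrative sample reproducible

# Hypothetical data: 200 school IDs mapped to a prior-year
# accountability index (lower = lower performing).
school_scores = {f"school_{i:03d}": random.random() for i in range(200)}

# Step 1: take the bottom 10% of schools from a previous year.
n = len(school_scores)
bottom_10_pct = sorted(school_scores, key=school_scores.get)[: n // 10]

# Step 2: draw a random sample of half of them; there's your 5%.
identified_5_pct = random.sample(bottom_10_pct, len(bottom_10_pct) // 2)

# Step 3: target support at the entire 10%, because they all need it.
schools_to_support = bottom_10_pct

print(len(bottom_10_pct), len(identified_5_pct))  # prints "20 10"
```

The coin flip in the second step is the joke; the targeted support in the last step is the point.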

The result of the process above will be at least as accurate as anything that we have ever done with annual state tests and accountability systems. That’s not a criticism of state tests or accountability systems, it’s just reality.

Through-year assessment done well may well require us to rely more on the judgments of local educators.

I know that thought scares us.

But remember that our concern has not been that teachers do not know whether a student is Proficient or do not understand what it means for a student to demonstrate proficiency.  We would refer to that situation as a “knowledge gap” or an “awareness gap” or perhaps just plain ignorance of the standards. Instead, the claim has been that there is an “honesty gap” in the reporting of student proficiency by local educators. That’s a horse of a different color.

We know how to prevent honesty gaps, and we know how to minimize harm during transitions.

All of the ten key considerations and issues that the Center for Assessment describes as the state-of-the-field of through-year assessment are real. I am confident, however, that states and their assessment partners can and will address them. We don’t have a choice.

As in the past, the states and assessment partners that lead the way may not be the ones that we expect. That’s how innovation often works.

None of this will happen, however, until we boldly, but not recklessly, push through the walls and make our way out of the box that is on-demand, end-of-year testing.

Header image by Gerd Altmann from Pixabay

Other images, in order of appearance, by Elias, OpenClipart-Vectors, No-longer-here, and Junah Rosales from Pixabay

Published by Charlie DePascale

Charlie DePascale is an educational consultant specializing in the area of large-scale educational assessment. When absolutely necessary, he is a psychometrician. The ideas expressed in these posts are his (at least at the time they were written), and are not intended to reflect the views of any organizations with which he is affiliated personally or professionally.
