We all know that large-scale state testing is bound by constraints. Some might go so far as to argue that state testing is defined by these constraints. Time, cost, security, well-intentioned but ill-conceived federal regulations, and outdated peer review expectations all tightly shape the who, what, when, where, why, and how of state testing.
How do we move beyond those constraints? Can we reimagine and reinvent state assessment programs that are not bound by the same concerns of time, cost, and security?
I have always been a strong believer in the Disney maxim “If we can dream it, we can do it.”
And I still believe, but to paraphrase Bill Clinton, it depends upon what the meaning of the word “it” is and whether we mean “it” when we say it.
It also depends upon who “we” are – literally and figuratively.
That’s A Right Pretty Box You’ve Got There
One of the biggest mistakes we make in reimagining and reinventing state assessment is expecting a box maker to be able to think outside of the box. And by box maker I am referring to both test contractors and state departments of education.
The simple fact is that box makers make boxes.
Oh, they might make big boxes and small boxes, lightweight boxes and heavy-duty boxes, custom boxes with your assessment program logo printed on them, and perhaps even offer a build-your-own box home kit. They may rebrand their box as a “repository” or a “vessel of student learning”. In the end, however, they will still produce a box. A system does what it is designed to do.
In the 1990s, it wasn’t CTB or Riverside or Harcourt that led the way in integrating constructed-response items into state testing. It was Advanced Systems. In the 2010s, it wasn’t Advanced Systems (by then Measured Progress and now Cognia) on the main stage as the field shifted from paper-based testing to computer-based testing and computer adaptive testing. That platform was occupied by companies like Pearson and AIR Assessment (now Cambium Assessment – “the leader in online testing”).
Can a company reinvent itself and be a leader across generations of assessment innovation? Sure, it’s possible. Is it easy or likely? No, doesn’t seem so.
What company or companies will turn our dreams of reimagined state assessment programs into reality? That’s still to be determined and depends in large part on where our imaginations take us.
If we can dream it, we can do it.
But I wonder, who will that be?
Can we – states and others entrenched in state testing – dream it?
I have been involved in several attempts by states to develop a performance assessment as part of their assessment program or to incorporate performance events into their current state tests. As I wrote in a January 2021 post, every one of those projects went something like this:
In short, the project begins with high expectations; as large-scale constraints are considered and applied, the performance assessment becomes smaller in scope and much less appealing; until ultimately, the notion of an actual performance assessment is abandoned in favor of sticking a “performance task” label on a gussied-up passage-based writing task or stimulus-based testlet.
State assessment directors and their teams, like test contractors, do what they are designed to do. They effectively design and manage external, on-demand, large-scale, typically end-of-year tests. Thinking within that box is not a flaw or a bug, it’s a feature – a critical feature in their current positions.
The system does what it’s designed to do.
But what if the state decides it’s time to break out of the box? What if it decides it’s time for a new system?
Step 1 – Order Matters
Even if you have the best intentions and the “right people” at the table, order matters when beginning a discussion about re-imagining or even just renovating a state testing program.
The following example from TAC meetings serves as a cautionary tale. [Note: I use gender-specific pronouns in what follows because I am picturing a specific meeting and specific person as I write this. The principle, however, is generalizable.]
I cannot count the number of TAC meetings I attended in which a promising discussion was brought to a halt before it even began by that one TAC member. You all know him; you’ve been on TACs with him. Whenever the agenda topic addresses potential changes to the assessment program, his initial, and often only, contribution to the discussion is the question, “What are your constraints?” He then smiles politely, sits back, closes his eyes, and we all listen for the next 15-20 minutes as the state describes in great detail all of the constraints they must consider. By the time the state is finished listing constraints there is simply nowhere left to go with the discussion.
As a TAC facilitator, I learned never to call on him first. As each constraint is listed, you can feel the air being sucked out of the room and out of the agenda topic.
For whatever reason, it is a completely different experience when the first question asked is, “What do you hope to accomplish?” or “Why do you want to make a change to the program?”
Asking those questions first doesn’t change the realities of the situation. The state’s answers to those questions do not eliminate, or even lessen, the constraints the state is facing. What asking them first does, however, is change the tenor of the discussion.
The conversation shifts from a brainstorming session that produces a list of all the things that the state cannot do given those constraints to a discussion in which TAC members offer expert opinions on alternative approaches the state might take to be able to realize the one or two key changes or outcomes they desire – even with all of their constraints.
Slip The Surly Bonds
Whenever I dream about the future of state assessment, my mind inevitably returns to the poem High Flight, which I first encountered in a high school English class. What will it look like when state assessment slips the surly bonds of the constraints that keep it securely contained within its standardized, end-of-year box? What will cause us to say that we have “danced the skies on laughter-silvered wings” or “topped the windswept heights with easy grace where never lark, or even eagle flew”?
For me, after three decades, the answer now is obvious. We will only reach those heights when we release state assessment from the constraints of state testing.
State tests will always have constraints, and the external constraints listed at the top of this post pale in comparison to the internal limitations associated with drawing inferences about individual student performance from a single test score (or even from 3-4 test scores collected throughout the year).
We have to break free from the notion that this is a bad thing. It is what it is and what it will be.
There is no longer a need, however, for state assessment programs to be defined by the constraints and limitations associated with state tests. As I have written previously on several occasions, at one time a state test was arguably the most effective and efficient method of implementing a state assessment program – understandably contributing to the conflation of the two.
Technology and state standards (content and achievement), however, have made it not only feasible, but necessary, to expand our view of state assessment – to realize that a state test is only one of many tools in our state assessment toolbox.
More on what the future of state testing, state assessment, and perhaps even state accountability systems might look like in future posts.