Perhaps it’s due to a disconnect between what’s assessed on the state test and what goes on in the school.
Perhaps it’s just perspective. One group looking down at the state test results and the other group looking up.
Whatever the reason, more than two decades into the NCLB era of assessment and accountability, it seems that it is not unusual for states and schools to look at the same set of state test results and see two very different things. To look at the same reports and come away with very different conclusions about school performance and what should be done about it.
Perhaps I shouldn’t be surprised.
Even if each state assessment report were a masterpiece, beauty and meaning are in the eye of the beholder. Two people can look at the same work of art and come away with very different impressions and feelings. Of course, as much as we would like them to be, state assessment reports are rarely masterpieces.
If we consider them a work of art, state assessment reports may function like those autostereograms (aka Magic Eye pictures) that were all the rage in the 1990s when I was just starting out in large-scale testing at Advanced Systems, before it became Measured Progress, which ultimately became Cognia. Stare long enough and just the right way at the print and you just might see the Statue of Liberty, or a rocket ship, or a shark. Stare long enough and just the right way at a state assessment report and …
Or maybe state assessment reports are like one of those optical illusion drawings. Once you’ve looked at it and seen the old lady, it’s very difficult to see the young woman. No matter how hard some people try, they either see the vase or they see the people but not both.
Or perhaps, as I suggested above, it’s just a matter of perspective. Local educators may be too deep in the weeds of the daily life of the school to focus on the forest beyond the tree that needs their immediate attention. State department staff and policymakers, like astronauts viewing earth from space, see everything at once but are viewing the situation from too great a distance to see and understand the details that lie beneath.
That makes sense to me. Of course, states and schools view state assessment reports differently. But surely, they would reach a shared perspective when they sit down and communicate with each other about what they see.
Therein lies the problem.
More than two decades into the NCLB era of assessment and accountability, there remains a stunning lack of communication between states and schools (or districts) around state assessment results.
Perhaps it’s due to each annual state test administration being treated as a one-off, stand-alone event rather than as part of a coherent, continuous school improvement effort.
Perhaps both sides are just too caught up in the here and now to pause and look back. When state assessment reports are released, testing folks are deep into preparation for the next test administration. Local educators are ready to kick off a new school year with its own set of challenges.
Perhaps it’s a cultural thing. As a society, we just keep moving forward without taking time to reflect. Like CBS airing a commercial for the 2024 Super Bowl the day after this year’s game or ESPN already publishing way-too-early power rankings for next football season.
Perhaps it’s because everyone lacks a sense of agency with regard to state assessment and state test results. No, I don’t really understand what sense of agency means, but it sounds like a thing you should have but probably don’t have when state and federal agencies are involved.
But whatever the reason, rarely do states and districts sit down and have a serious conversation around the most basic of all questions regarding test results: Do you agree with them?
Do the state test results accurately reflect your understanding of school performance in English language arts and mathematics?
Or as expressed in the title of this post, do I see what you see?
I intentionally refer to school and not student performance because it is overall group-level performance that I am concerned with here, and I don’t want to be distracted by the idiosyncratic performance of individual students, which on a given test on a given day may be influenced by any number of relevant and irrelevant factors.
I also intentionally limited the question to agreement with the results on the state test; that is, performance in the tested subject areas. Agreement with ratings of school effectiveness or quality based on how test results are used in an accountability system is an entirely different question.
And I flipped the title question from the more commonplace, “Do you see what I see?” to “Do I see what you see?” for a reason. It may seem like a minor change in wording, but it’s important when engaging in conversation with educators about state test results for those of us on the state side of the table (e.g., state department staff, policymakers, assessment specialists) to possess and convey a healthy sense of skepticism.
It’s important to remember that the real world that we are trying to model and the real achievement that we are trying to measure via our state assessment program sits on the other side of the table.
There are exceptions, of course, to the lack of communication I describe. Sometimes unsolicited feedback and informal conversations do break through and make a difference. But often those exceptions involve exceptional people.
I recall the persistent district math specialist whose concerns about stagnant math scores were ultimately proved right by the discovery that the equating method, as applied, was not adequately capturing growth across years.
And there was the Milken Award-winning assistant principal who joined us in conducting state assessment workshops, sharing with fellow administrators across the state his data wall discovery that most of his tenth-grade students performing at the lowest achievement level were enrolled in a two-year Algebra I program; that is, those students had not come close to encountering all of the Algebra I and Geometry standards assessed on the state’s tenth-grade mathematics test. Then he shared what he and his faculty did about it.
Of course, there are assessment committees that bring together a handful of state and school content specialists, and there is the occasional school administrator who serves on a state assessment TAC.
But much more is needed.
The type and level of interaction among state, assessment, and school folks is not sufficient to validate the design and results of current state assessment programs.
And if we are really serious about re-imagining assessment, we have to do better than simply imagining what goes on in schools and classrooms.
We have to really understand what proficient, or college-and-career ready, students know and are able to do.
We have to see and understand the various ways that students can manifest mastery of the state standards or proficiency in an assessed content area and be prepared to capture that information reliably, fairly, and efficiently.
In short, when designing and validating assessments, we have to be willing to engage educators and students and ask the question, “Do I see what you see?”
Header image by PIRO from Pixabay
Additional images also from Pixabay