Clowns to the left of me!
Jokers to the right!
Here I am stuck in the middle with you.
Read the headlines, listen to podcasts, or chat in line while waiting to show your vaccine card and you might come away with the impression that everything in education revolves around assessment.
The root of everything that is working right in education and everything that is wrong about education, especially everything going wrong, can be traced back to assessment.
All of the problems in education can be solved by more assessment, or less assessment, or faster, better, and cheaper assessment, or by more personalized, culturally responsive assessment.
We are the center of the universe, the source of all energy, the Α and the Ω.
We are King of the World!
I’ve got the feeling that something ain’t right.
If that description makes you just a bit uncomfortable, good.
If it doesn’t, then please step away from your blog, take a nice, long sabbatical from the university, or step down as CEO of the testing company you run.
Things haven’t felt quite right in assessment for a while. That uneasy feeling applies whether the topic is formative assessment, interim assessment, large-scale assessment, or the interactions among them. Too much importance has simply been placed on assessment and its role in the daily functioning of education systems. More importantly, too much has been assumed about the role that assessment can play in reforming those systems.
Shining a spotlight for too long on only one part of the stage gives the audience a distorted view of the set and gives the person in the spotlight a distorted view of their role. The same is true for education reform and educational assessment.
It can feel nice to be the center of attention every now and then – for a short time. But that spotlight gets pretty hot when it’s focused on you all the time. Heavy is the head that wears the crown.
Losing control and running all over the place.
The fact is that those of us involved in assessment have no more control over education reform than Jack had over the Titanic when he proclaimed himself King of the World. And like Jack, we bear no responsibility for the sinking of the good ship Education Reform.
To continue with the Titanic analogy, some may view educational assessment, particularly large-scale state testing, as the engine of the ship or perhaps its lifeboats. I view assessment more as the food served to the passengers – vital to the voyage, but not to ensuring that the ship made it safely from Southampton to New York City. A more cynical perspective might view state testing as the band that kept playing until the bitter end. Another might view it as the locked passageway gates shown in the movie, which had the unintended consequence of preventing third-class passengers from escaping to safety.
In any event, for the past 20 years we have allowed other people to assume control of not only the assessment agenda (which may be appropriate), but also key design and development decisions.
- They say raise (or lower) achievement standards, we say how high.
- Stand up a new assessment program in 12 months. No problem.
- Report student subscores. Sure.
- Offer universal accommodations and tools, extended testing time, extended test windows, DIY tests built from an item bank, …
Trying to make some sense of it all
But I see it makes no sense at all.
All of that, however, pales in comparison to the deleterious effect on testing of allowing the feds to dictate assessment design.
Recently, many in the measurement community have decried Adequate Yearly Progress (AYP) and other uses of large-scale testing introduced by NCLB. Make no mistake, AYP was fatally flawed from the start (which is much worse than fundamentally flawed), but its flaws had little, if anything, to do with assessment. As I wrote in a blog post last fall, AYP would not have sucked any less without test scores.
The real blow to assessment was found in the Race to the Top/NCLB Waiver era requirements that state tests measure the full breadth and depth of the state standards and identify where each and every student falls along the proficiency continuum. Those requirements were ill-conceived for state testing and a death knell for any assessment program that attempted to meet them faithfully.
I don’t think that I can take anymore.
So, now we have reached the point where just about everyone inside and outside of the field is questioning just about everything: state testing, testing for college and school admissions, the usefulness of interim assessment, grading, homework; in short, everything associated with the evaluation of student performance.
Critical questioning, in general, is a good thing and is necessary for growth and constructive change.
Too much of a good thing, however, can still be harmful. Everything in moderation.
And I’m wondering what it is I should do.
Where do we go from here?
Assessment alone is not going to save (or sink) education reform or make education systems more equitable. Building assessment programs and tests that are more valid, reliable, and fair for all students will be a lengthy process, particularly if we spend time thinking about what should be measured and why.
Acknowledging our limitations and our role in the process (i.e., stuck in the middle), however, there are some immediate steps that we can take as we move forward from the pandemic that can also guide us toward the next reauthorization of ESEA.
- Right-size and right-purpose large-scale state summative assessment programs.
- The primary purpose of state summative assessment programs is to fulfill the state’s need to efficiently collect comparable, high-quality data about student achievement on the state content standards (i.e., proficiency) that can be used to generate information to evaluate the performance of schools and districts across the state.
- To the extent that on-demand tests are part of this process, keep them short and focused on determining the magnitude of student proficiency.
- Take advantage of Student Information Management Systems to collect additional information about student proficiency directly from schools and teachers.
- Do the same for interim assessment.
- Interim assessment is a valuable tool for school and district administrators and teachers to monitor the progress of groups of students throughout the year. Currently, most interim tests are about the right size and administered with the right frequency to serve this purpose.
- Don’t expect too much more from it.
- Place the bulk of the reform effort into supporting high-quality curriculum, instruction, and assessment at the classroom level (i.e., where the teachers and students are).