Running In Circles, Chasing Our Tales

Those of us of a certain age clearly remember that seemingly innocuous question from Roger Mudd that effectively ended Ted Kennedy’s 1980 campaign for president before it got started: Why do you want to be president? For anyone tuning in to that interview, friend and foe alike, it was extremely uncomfortable to watch Kennedy struggle, stammer, and ultimately fail to produce a coherent response.

I relate that dated story today because I have been experiencing that same awkward, uncomfortable feeling these past few weeks seeing people whom I admire struggle to talk about the “why” or the “what” or the “how” of assessment, accountability, instruction, or even the purpose and goals of education itself.

It might be a give-and-take on LinkedIn about the role and limitations of large-scale assessment in informing instruction. It might be a weekly newsletter reporting on a “new” approach to school accountability. It might be an article on competencies, personalization, or the latest attempts to curb absenteeism and make school more engaging for students. It might be a discussion of assessing 21st century skills before it’s time to start thinking about the 22nd century. And for each one of those, there is likely a corresponding article, editorial, or post about how AI is going to make it all better, more efficient, or perhaps not.

These are the thoughts of people who, like Ted Kennedy in 1980, arguably are at the height of their game, the peak of their powers. They have given long, hard thought to these issues for a number of years, and yet they struggle. I struggle. You struggle.

We all struggle to tell our story. The words are all there, arranged in sentences that sound plausible, but there is something missing. They fall short of telling our story.

Why is it that in 2025 we are still trying to convey the same information about large-scale assessment and its relation to instruction that we were trying to convey in 2015, 2005, and 1995?

What is an effective school?

Why is it so difficult to describe what we want from assessments of 21st century skills, performance events, social-emotional competencies, etc.?

In assessment, I have seen the same scenario play out time and time again.

Assessment 101: If you can describe it, we can measure it!

If you can describe it, we can measure it!

That’s the one little phrase that topples the grandest of assessment plans like a house of cards.

At the risk of contributing to the god complex that plagues so many assessment experts and psychometricians, I might even say that phrase is in many ways the psychometric equivalent of “Let he who is without sin cast the first stone.” Think about it…

You have a crowd all riled up and ready to take action on the latest and greatest concept, construct, or competency that will transform education and assessment. They issue an RFP and bring in the best contractors and consultants that the lowest qualified bid can buy. At the project kickoff meeting, the excitement is palpable. Then one of those top-notch technical experts scratches some notes on a piece of paper, looks up, and utters the phrase:

If you can describe it, we can measure it.

There is an uneasy silence around the crowded table. One by one, the once-enthused throng disperses as all the air is sucked out of the room. But because nature and assessment regulations abhor a vacuum, the core team that remains begins to hammer out a test design while, with a gentle wave of the hand, the assessment guru calmly reassures them: You don’t really need to assess 21st century skills. Those aren’t the skills you are looking for. Move along.

Together they go about the business of building on-demand tests that, despite the latest computer-based and automated bells and whistles, don’t look or function all that differently from the commercial, off-the-shelf NRT we thought we left behind in 1990. I’ve sat on all sides of that table, and the view is not satisfying from any perspective.

And then the scores from those assessment programs are fed into accountability systems.

Accountability: Some people have accountability thrust upon them.

As much difficulty as we might have crafting coherent internal and external tales about assessment, our struggles with accountability are tenfold because federal accountability (in contrast to state and local accountability) is fabricated out of whole cloth by laws and regulations with no grounding in the reality of schools or statistics.

What are accountability and an accountability system? A rating, an evaluation, an intervention, all of the above, none of the above? What does the amalgam of proficiency, growth, and absenteeism they produce tell us about school quality or effectiveness? As Nate Bargatze might say, “Nobody knows.”

The fine folks at the Fordham Institute published a report last week promoting “a new way” to measure elementary and middle school quality. The report argues that preparedness for the next level might be the appropriate focus for school accountability; that is, judge elementary schools on student preparedness for middle school, and middle schools on preparedness for high school. High schools, of course, would be judged on students’ readiness for college and career. Without commenting on the proposed methodology, conceptually the forward-looking approach is appealing. But it is also not a new idea.

Back in the days of testing at grades 4, 8, and 10, there was an implicit focus on preparedness for what comes next. In states that tied promotion decisions to test results at grades four and eight or graduation decisions to high school test results, that connection was more explicit.

Under NCLB, when it came time to develop general achievement level descriptions for the New England Common Assessment Program (NECAP) the distinguishing factor among achievement levels was the extent to which students could “demonstrate the prerequisite knowledge and skills to participate and perform successfully in instructional activities” aligned with standards in the grade they were entering.

With RTTT and ESSA came the shift to college-and-career readiness (CCR) and “on track” to CCR, which one would have assumed cemented the forward-looking focus of accountability: Are kids prepared for what’s to come?

But still we struggle to tell even that story. Something gets lost between the achievement level classifications derived from tests and the ratings churned out by accountability systems.

I cannot see how adding more components to the accountability rating, in the hope of a more holistic or comprehensive approach to deriving a school’s rating, would be sufficient to clear those muddied waters without first having a clear understanding and description of what constitutes an effective school, and a clear connection between the accountability rating and that description.

It’s Not a Simple Story

But the story of what makes a school effective and whether a school is effective is not a simple story. As Jim Popham explained a quarter century ago, high achieving is not necessarily the same as effective. And as the Fordham report lays out, the story can become quite complex and convoluted when you try to account for all of the characters, plot points, and twists and turns needed to build measures that are valid, reliable, fair, and timely.

Finally, the problem that we are trying to solve and the tale that we are trying to tell is one that goes against our basic instincts. We are conditioned to try to make sense of things by breaking them down to their component parts, their simplest form: the interaction between the student and the teacher; the interaction between the test item and student ability. Then we build out, aggregate, and scale up from there.

But that’s not how this story works. Education is a story of layers building upon layers, of nesting, of interactions and relationships, of chaos within order, of randomness within predictability. The odyssey between early childhood and postsecondary education is an epic tale that requires a different kind of story and a different kind of storyteller than we have routinely produced. I am confident, however, that it’s a story we can tell as soon as we stop running in circles chasing our tails.

Image by Gerd Altmann from Pixabay

Published by Charlie DePascale

Charlie DePascale is an educational consultant specializing in the area of large-scale educational assessment. When absolutely necessary, he is a psychometrician. The ideas expressed in these posts are his (at least at the time they were written), and are not intended to reflect the views of any organizations with which he is affiliated personally or professionally.
