To Those Burrs In Our Saddle

I truly enjoyed reading the many posts describing the amazing work showcased at NCME in LA, and the positive, uplifting experience that the conference was for everyone. But in this week’s post I want to acknowledge the contribution of those who take it upon themselves to poke, prod, and noodge at every presentation and in response to every post. It takes a village, and they are part of ours.

On Scales, Achievement Standards, and Trends

A bonus blog post in honor of #NCME2026 conference week. Last month I tugged on Superman’s cape when I suggested that preserving the NAEP trend might not be in our best interest. Today, I refresh a presentation from the early days of the CCSS, PARCC, and Smarter Balanced to clarify that reporting a NAEP trend is not the problem. Rather, the problem may be in the way that we in educational measurement and assessment tie trends to fixed achievement standards and scales.

How I Spent My Winter Mornings

They say that you can’t teach an old dog new tricks, but this past winter I decided to try my hand at solving the New York Times Crossword, something I had avoided doing to this point in my life. Along the way, I acquired a little proficiency in solving crosswords and remembered some important lessons about teaching and learning new skills in general. Time well spent.

The Significance* of NAEP

I wrap up my March series, NAEP by the Numbers, with the number .05 and a discussion of significance and differences. The significance of NAEP lies far beyond score differences within and across years that are statistically significant at the .05 level. Much of what makes NAEP significant is that it is different. Different from state tests. Different from tests administered by schools and districts. It serves a different purpose. A purpose for which it is well-designed. Simply put, NAEP is NAEP.

Batting .500 with the NAEP Scale

In the second post of my NAEP by the Numbers series, I reflect on the NAEP 0-500 scales: both the Long Term Trend scale that stretches back to the 1970s and the new scale developed when NAEP began reporting state results some 35 years ago. At times, impressive. Other times frustrating. Love it or hate it, there’s nothing in our field quite like the NAEP scale.

20 at the 10th

For March, I’m planning a series of posts looking at NAEP by the numbers. The first two numbers are 20 and 10, as in the 20 students performing at the 10th percentile in reading and mathematics in a typical NAEP state sample. We’re all concerned that the bottom has been falling out of NAEP results, but my question is just how well we understand who those 10th percentile students are.

The 10th percentile, sitting out there 1.28 standard deviations below the mean, is kind of an abstract concept, but a classroom-size sample of 20 kids is something we should be able to wrap our heads around.
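For readers who want to see where the 1.28 comes from: it is simply the normal-curve z-score for the 10th percentile. A quick sketch in Python (the mean and standard deviation below are hypothetical illustration values, not actual NAEP statistics):

```python
from statistics import NormalDist

# z-score corresponding to the 10th percentile of a standard normal:
# about 1.28 standard deviations below the mean.
z = NormalDist().inv_cdf(0.10)
print(round(z, 2))  # -1.28

# Translating that to a hypothetical 0-500 reporting scale
# (mean and sd here are made-up numbers for illustration only):
mean, sd = 250, 35
tenth_percentile_score = mean + z * sd
print(round(tenth_percentile_score))  # 205
```

On any roughly normal score distribution, the same arithmetic locates the 10th-percentile student: mean plus 1.28 standard deviations down.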

They Told Me There’d Be Consequences

The Olympics are over and it’s a blizzardy Monday morning. In other words, it’s a perfect time to peruse the preliminary program for the upcoming NCME annual meeting. Of course, every action has consequences. In this case, the consequence is a blog post about consequences. I’ll admit that I have no idea who John Ruskin is, but as I read through the program, I couldn’t help but think of these words of his, “What we think or what we know or what we believe is in the end of little consequence. The only thing of consequence is what we do.”

Frankenstein’s Graduate

In releasing the interim report outlining its new graduation Framework, Massachusetts boasts, “no other state will have implemented such a comprehensive approach to setting such high standards in education…” 

My response, as the kids say: “Sick brag, bro.”

I’m not exactly sure where having “such high standards” compared to other states fits in the validity argument. I would be much more impressed by claims and evidence of having carefully identified the right graduation standards for the future and having a solid implementation plan for achieving those standards.