Best In Show

If you are like me – and it occurs to me that this may be the first time I have ever typed that phrase – your total exposure to dog shows consists of watching the National Dog Show that immediately follows Santa’s annual appearance in the Macy’s Thanksgiving Day Parade on NBC (and perhaps news reports about the winner of the Westminster Dog Show). Well, there was high drama this year as, for the first time in the show’s 20-year history, a Scottish deerhound, Claire, successfully defended her title as Best In Show.

While digesting my apple pie later that afternoon, I began to wonder how they chose Best In Show from among the winners in the seven groups of dogs – sporting, hound, working, terrier, toy, non-sporting, and herding. I’ll admit that I had never given much thought to the judging criteria. We usually just watch, pick out the cutest dog in each group, are disappointed for a second when a strange-looking dog wins, and then wait for the next group to come out. As it turns out, however, the Best In Show process is quite interesting and may serve as an example to those thinking about the next generation of assessment and accountability systems.

The brief and admittedly novice explanation goes like this:

The Best In Show winner is not based on a direct head-to-head – or tail-to-tail – comparison of the dogs. The dog selected as Best In Show is the one who most closely matches the breed standards established for its group. That is, it’s not a question of whether the hound group winner is “better” than the terrier group winner. Rather, it’s a question of whether the hound winner comes closer to perfection in meeting the hound standards than the terrier winner does in meeting the terrier standards.

Each of the groups has its own set of standards against which the breeds of dogs within that group are judged. (Apparently there are close to 200 breeds represented across the seven groups.) Those criteria are based on the types of tasks the dog is expected to perform or the traits it is expected to possess. Hunting dogs are different from herding dogs, which are different from toy dogs. Form Follows Function.

We shouldn’t blindly seek comparability, in the sense of direct head-to-head comparisons or interchangeable scores, when there is no reason to expect two dogs, students, teachers, or schools to meet the same set of standards.

School accountability systems under NCLB and ESSA have centered on students achieving state standards in English language arts and mathematics – understandable given their roots in Title I.  Two truths, however, have been obvious to all involved from the very beginning:

  1. Schools are supposed to do much more than teach students English language arts and mathematics.
  2. Not all schools, and programs within schools, are designed to accomplish the same goals.

The first truth needs no further discussion. I will spend a little time on the second.

Beginning at the secondary level, my colleague Chris Domaleski has written about the need to consider accountability differently for alternative high schools. The same argument could be made, however, for vocational/technical schools v. college preparatory schools v. performing arts schools v. “traditional” comprehensive high schools and the various programs housed within them. Are there some common elements, purposes, and goals across the various groupings or breeds of secondary schools? Of course, but the differences in what those schools are designed to accomplish are what define those schools and must be reflected in school accountability systems.

Secondary schools, however, are just the tip of the iceberg. I think that most of us would nod in agreement at the statement that elementary, middle, and secondary schools serve different purposes. I also think, however, that few of us – particularly those of us on the outside designing accountability systems – appreciate just how profoundly different the purposes and goals are among elementary, middle, and secondary schools. Those differences must be reflected in school accountability systems.

The best laid plans of mice and men often go awry

If the best laid plans often go awry, it should come as no surprise how often and how badly our plans to differentiate standards among students and schools have gone awry. That is not sufficient reason not to try again. Our attempts to differentiate have ranged from the disastrous, like tracking, to the less than successful, like similar school bands in the 1980s, safe harbor under NCLB, or different “long-term” goals for subgroups under ESSA – but the only thing worse than all of those attempts is the decision to hold all students and schools to the same standards.

The same holds true for the conditional analyses, value-added models, and regression-based statistics developed in an attempt to level the playing field. Such approaches can have disastrous effects if used to mask differences in outcomes or perpetuate differences in opportunities, but when used appropriately they can be powerful tools in the effort to improve schools and student learning. A critical step, of course, is not to stop with the conditional, or norm-referenced, information on its own, but to combine it with criterion-referenced information, to examine performance over time, and to figure out what needs to be done to change the “expected performance” under current conditions.
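
To make that distinction concrete, here is a minimal sketch in Python of the regression-based idea described above – not any state’s actual model. The school scores and the proficiency cut are all invented for illustration: a simple regression produces each school’s “expected performance” given prior achievement, the residual is the norm-referenced, value-added signal, and the cut score supplies the criterion-referenced check that the residual alone cannot.

    import numpy as np

    # Hypothetical data: one value per school (all numbers invented for illustration).
    prior_score = np.array([210.0, 225.0, 198.0, 240.0, 215.0])   # mean prior-year score
    actual_score = np.array([216.0, 228.0, 209.0, 241.0, 214.0])  # mean current-year score
    proficiency_cut = 220.0  # hypothetical criterion-referenced standard

    # Fit a simple linear model: expected current score conditional on prior score.
    slope, intercept = np.polyfit(prior_score, actual_score, deg=1)
    expected_score = slope * prior_score + intercept

    # Norm-referenced signal: distance from "expected performance" under current conditions.
    value_added = actual_score - expected_score

    for i in range(len(actual_score)):
        status = "meets" if actual_score[i] >= proficiency_cut else "below"
        print(f"School {i + 1}: expected {expected_score[i]:.1f}, "
              f"value-added {value_added[i]:+.1f}, {status} the standard")

The point of the sketch is the last line: a school can post positive value-added while remaining below the standard (or negative value-added while sitting comfortably above it), which is exactly why the conditional, norm-referenced information cannot stand on its own.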

The purpose of building models of virus spread or weather patterns or student growth or high schools’ promotion power is never simply to describe the current situation, but rather to use those models to prepare for and, hopefully, improve future situations. Working constantly in the fog of producing annual accountability reports, we tend to forget that.

They say that we can learn a lot of valuable life lessons from dogs: get excited when you see your best friend, show your loved ones just how much you care about them, and make sure you get enough sleep. Perhaps this year we can also learn an important lesson from the dog show – there is more than one way to think about evaluating schools and school accountability.


