“Everything should be made as simple as possible, but no simpler.” – Einstein
In this post, I offer a Valentine’s Day gift, in the form of three design principles, to those states and their advisers still struggling to design school accountability systems that meet the requirements of ESSA.
Three principles for the ideal school accountability system:
- It should be made as simple as possible, and then a little simpler.
- It should trigger few, if any, automatic consequences.
- It should be obviously incomplete.
In short, when designing school accountability systems, the mantra must be Keep It Simple, States.
Of course, the concept of simplicity is not new. KISS as a design principle dates back more than 50 years. In school accountability, the Foundation for Excellence in Education has led the charge for states to meet federal accountability requirements “by having each school earn a single letter grade each year.” Their A–F grading system has been adopted by more than a dozen states across the country. In making their case for the simplicity of the A–F system, they state:
The easy-to-understand A-F ratings are crucial for promoting transparency and establishing effective incentives for schools. Not surprisingly, these ratings have been incredibly popular with parents.
At the 2016 CCSSO National Conference on Student Assessment, Kentucky Commissioner of Education Stephen Pruitt offered this advice about school accountability systems:
I believe you should be able to explain your accountability system standing in the line at the grocery store. And right now, you’ve got to be in a 15 item or less line behind 35 people and they’re all writing checks, before you can actually explain the system. And so, we’ve got to find a way to make it simpler.
Many of us involved in the design of school accountability systems over the last two decades have found it difficult to accept the idea of simplicity.
- How can something as complex as school effectiveness or school quality be reduced to a single letter grade (A, B, C, D, F) or a simple rating?
- How can multiple outcome indicators such as academic achievement, academic progress or growth, graduation rates, and English Learner Proficiency be combined into a single composite rating that is valid, reliable, and fair for all schools?
- How can I possibly translate a set of regulations that runs anywhere from a mere 173 pages, to 383 pages, to a massive 1,029 pages, depending on the format of the document you are reviewing, into a system that produces a single, meaningful summative rating?
Having considered questions such as these for more than 15 years, I have concluded that the answers are simply:
- It can’t be.
- They can’t be.
- You can’t.
But rather than serving as a barrier to designing school accountability systems, those negative answers are actually quite freeing. When we accept that a test-based school accountability system cannot possibly tell us everything that it is important to know about a school, it becomes so much easier to design the system. We can shift our focus from tinkering with complex metrics and decision rules that accomplish very little to the more important task of communicating with key stakeholders about what information the accountability system conveys about schools, what information it does not convey, and how best to use that information.
I am comfortable with the idea that a test-based school accountability system serves a critical, but very limited, function within the complex system designed to educate our children and produce productive citizens. I do not expect the school accountability system to protect our children from bears, nor do I expect the school accountability system to settle the great national debate between proficiency and growth (obligatory spoiler alert: both are important and useful pieces of information).
With that in mind, I give you my three design principles for the ideal school accountability system. Note that it is essential that you apply all three principles to the design of your system, or none of them. Applying only one or two of the three will only lead to trouble.
It should be made as simple as possible, and then a little simpler.
Make the system as simple as possible. Then go back and make it simpler. Seriously, there are only so many ways that you can describe school achievement and growth on annual tests in English Language Arts and Mathematics.
Don’t try to pack too many different outcome indicators into a single composite score or rating. Doing so tends to confuse rather than clarify.
Don’t attempt to eliminate uncertainty from the system. You can’t. Instead, acknowledge and embrace uncertainty. Teach users to understand and work with uncertainty.
Don’t try to resolve philosophical issues such as whether school scores in a given year should be treated as a population or a sample. Rather, teach people how and why scores are likely to fluctuate from year to year.
It should trigger few, if any, automatic consequences.
Of course, the first principle can only be accomplished if the accountability system does not trigger a host of automatic consequences. Prior to NCLB, some states designed tiered accountability systems in which low test scores simply triggered additional data collection and deeper evaluation, which, in turn, might trigger specific actions and appropriate levels of intervention or support for schools and districts. ESSA makes it easier to return to such an approach, but some additional creativity on the part of states will be needed.
Establishing exit criteria that consider factors other than the simple outcome indicators that flagged the school in the first place is a start. Soon, I hope to see states shift their focus from how to flag schools to what to do with schools that have been flagged by the accountability system.
It should be obviously incomplete.
Most important, it must be obvious to even the most casual observer that the test-based accountability system is incomplete and woefully inadequate to fully describe school effectiveness or school quality.
One of the biggest complaints about school accountability systems is that they do not measure all of the important things that a school does. Rather than trying to refute that argument, accept it. The first step toward accomplishing this is to limit the claims associated with the school accountability system. A second step is to demonstrate that you value other critical functions that a school performs. Reporting on school performance in those areas in as prominent a manner as you report the results of the test-based accountability system is one way to demonstrate their value.
A state should be able to explain why the factors included in the test-based accountability system are important. It should not put itself in the position of arguing that those factors are all that is important in evaluating the quality or effectiveness of a school.
Happy Valentine’s Day!