IADA: Starting from Scratch

I suggested in my last post that the word “innovation” does not mean the same thing to those of us in the assessment community that it does to the rest of the world. That in itself should not come as a surprise to anyone familiar with our work. Lots of words mean something different to us than they do to just about everyone else (see: alignment, bias, significant, grade-level), and there are just as many words on whose meaning we cannot seem to agree among ourselves (see: validity, growth, or even assessment).

One could make the claim that the shift from paper-and-pencil to computer-based testing was a major innovation in large-scale testing, perhaps the greatest innovation in more than a century. My educated guess, however, is that few people within the fields of measurement and assessment would look at the current crop of computer-based state tests, even the adaptive ones, and refer to them as innovative assessment. That sentiment is probably even stronger among educators. Innovation (e.g., technological advances) may be a necessary, but not sufficient, precondition for innovative assessment.

I also suggested that our concept of “innovative assessment” is often linked with doing something different and not simply with doing the same thing differently. We don’t want to assess the same thing more efficiently, or even more effectively. We want “innovative assessment” to assess something different.

That something different may be implementing and assessing the Common Core State Standards (CCSS) or the Next Generation Science Standards (NGSS) or 21st century skills – standards that represent content and processes in a way that is fundamentally different than what was taught and tested before. That something different may be introducing a new interdisciplinary content area and related assessment, such as Louisiana’s combining of social studies and English language arts to form Humanities. That something different may be a new focus on and commitment to ensuring that our assessment processes and test instruments support deeper learning, student engagement, and/or culturally responsive-sustaining education.

All of which are well-intended, worthy, and arguably, critical goals.

All of which will make life more challenging, at least in the short term, for all involved – measurement theorists, assessment specialists, educators, and even students. As I wrote in a series of CenterLine posts in 2019 and described somewhat graphically in a 2021 presentation, while most innovation can be touted as making life easier and better for those involved, innovative assessment at best can claim the latter. Like daily exercise, a healthy diet, clean living, a trip to the dentist, and hard work, the case for innovative assessment often rests on the premise that sacrifice now will lead to a promised payoff down the road – always a hard sell, but even more difficult now in our instant-gratification, crisis-driven culture.

All of which require changes, profound changes, not only to assessment, but also to curriculum, instruction, and perhaps the entire education infrastructure. Therein, my friends, lies the biggest problem for a humble little program such as the IADA. On top of making life more difficult in the short term, we really don’t have a proven assessment solution that we know will work – or proven solutions in curriculum and instruction either, for that matter.

It’s difficult to demonstrate something that doesn’t yet exist.

Allow Me to Demonstrate…

As described by USED, the Innovative Assessment Demonstration Authority provides eligible states with the opportunity to establish, operate, and evaluate an innovative assessment system.

Establish.

Operate.

Evaluate.

Missing from that list of action words are tasks such as conceptualize, design, develop, and test (e.g., field test, pilot test). I do not believe that omission is either unintentional or misguided.

Nothing about the IADA – from its name (i.e., Demonstration), to its requirements (technical quality, comparability), to its scope (a fully scalable program within five years), to the funding connected to it (i.e., none) – suggests that it is a program intended to be used by states and their partners to attempt to design, build, test, and implement an innovative assessment system from the ground up; that is, from scratch.

Rather, all of those factors suggest a program intended to allow states to implement statewide a system based on proven approaches. The timeframe allows states to work out logistical issues, to fine-tune the program around the edges, and to begin gathering evidence of the approach’s efficacy as a statewide assessment system.

To attempt to build an operational innovative assessment system from scratch within the constraints of the IADA is a recipe for disaster.

To attempt to do so while simultaneously asking school districts to implement a major curricular and instructional change is, well, …

First Encourage Me to Innovate…

Let me be clear that it is not my intent to suggest that states and their partners should not engage in the design, development, and implementation of innovative assessment and curricular/instructional programs, or ideally an innovative approach that fully integrates curriculum, instruction, and assessment.

States with the proper capacity and will to do so most definitely should engage in such activities, both on their own and with academic and industry partners on the cutting edge of innovative assessment.

I am also not suggesting that the federal government should play no role in supporting the development of innovative assessment programs, including providing financial support both to states and private companies. Through a variety of grant programs, the federal government has a solid track record in supporting the development of innovative alternate assessment programs and English language proficiency tests as well as the development of innovative tools to support inclusive assessment in all content areas.

I am suggesting that the design and development of innovative assessment solutions is not likely to occur within the constraints of a program such as IADA, and that’s as it should be. When an innovative assessment has been developed, tested, and is ready for primetime, then by all means, a state should take advantage of a program like the IADA.

Finally, I am urging us to recognize the difference between innovation/experimentation and demonstration/implementation. We have been down this road before with laboratory schools and demonstration schools, terms that we tend to use interchangeably. To reinforce that point, I close with a quote from a 1969 report from Indiana State University that could just as easily apply to the way that we have thought of the IADA:

Inherent in the dream of the campus laboratory school were conflicting functions proposed for the school and conflicting perceptions on the part of the human beings involved… The dream contemplated no conflict between the demonstration-observation-participation functions and the research-experimentation-inservice functions, but the conflict exists…

