Charlie DePascale (assisted by the words of Thomas Jefferson)
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable rights, that among these are life, liberty and the pursuit of happiness.
When in the course of human events, significant resources are allocated to educational assessment and important decisions are made on the basis of the results of educational assessment, it becomes necessary for the people to understand the Laws of Nature which govern educational assessment. Further, a decent respect for the people and the field requires that those with appropriate knowledge and understanding place before the people the common sense of the subject, in terms plain and firm.
Therefore, in educational assessment
We hold these truths to be self-evident, that reality is contextual, that we cannot eliminate uncertainty, that educational assessment is based on modeling, and modeling is not measurement.
In short, the fundamental truth that we hold to be self-evident is that there is no truth.
As the stakes of education have risen, so too have the demands placed on educational assessment.
- Whereas large-scale assessments were once administered annually, or even every two years, to students in select grade levels, administering assessments to all students in every grade level multiple times per year is now considered a viable option to reduce testing.
- Whereas it was once sufficient to report average performance for large groups of students on loosely defined content, it is now expected that assessment programs will produce detailed information about the achievement and growth of individual students against well-defined content and achievement standards.
- Whereas it was once acceptable to not assess large groups of students who did not fit the design of the assessments, it is now expected that assessments will be accessible to and provide actionable information about the performance of each individual student.
- Whereas large-scale assessments previously had few, if any, consequences for districts, schools, and teachers, the results of those assessments are now the cornerstone of accountability and education reform.
For the most part, the increase in demands placed on educational assessment has received the tacit approval of the educational measurement community. That is not meant to suggest that cautions have not been raised on occasion, or that individual measurement specialists or even groups of measurement specialists have not voiced opposition to certain policies; nor is it to ignore that some measurement specialists have based their research agendas on demonstrating flaws in assessment-based educational policy, or that at least one prominent individual with a background in measurement has disowned the educational measurement community. The reality is, however, that as the demands increase, the tests keep coming.
Perhaps it is hubris – a belief that we can rise to the challenge and build assessments to meet any demands. Perhaps it is fear – a belief that if we are honest about the limitations of our field, the entire field will be rejected. Perhaps it is a matter of economics – we give the people what they want (or in a more negative light – we can convince the people that we have what they need and want). Perhaps it is simply resignation – a belief that we are powerless to stop the inevitable. Whatever the explanation, the tests keep coming.
Today, the focus is on acknowledging the self-evident truths of educational measurement and, by doing so, providing the people with the information they need to become literate consumers and users of educational assessment.
Reality is contextual
When discussing assessment results, we tend to talk in terms of absolutes. We declare that Martha is proficient in mathematics, that Alexander is college-and-career ready, or that Washington High School is an A+ school. Of course, the meaning of each of those classifications and the inferences one can draw from them depend on context.
There is no universally accepted definition of the specific knowledge and skills that mathematics comprises. Mathematics for a particular context is defined by the set of content standards adopted at a given point in time for a specific purpose. Similarly, the meaning of proficiency in mathematics is tied directly to those content standards. One can engage in a chicken-or-egg argument about whether the concept of proficiency should flow from the content standards or vice versa, but together content and achievement standards often form a closed system. It is clear that proficiency on the basic skills mathematics standards from the 1970s was different from proficiency on the world-class standards established by some states in the 1980s and 1990s, which was different still from the concept of demonstrating college-and-career readiness on the Common Core State Standards in mathematics.
In addition, an unfortunate facet of the current system is that the reality of the concept of proficiency in mathematics is often defined by the assessment instrument rather than by the content standards or achievement level descriptions. Yes, we strive for alignment between the assessment and the content and achievement standards, but any educator will tell you that the meaning of proficiency does not become reality until they see the first set of assessment results.
We cannot eliminate uncertainty
In reporting assessment results, we do not shy away entirely from the concept of uncertainty, but we do our best to downplay it. We routinely report results with relatively small error bands and explain that a student’s true score falls within that band. We explain that all assessment results are imprecise and contain measurement error, but suggest that such error can be controlled through better test design and/or accounted for through statistical techniques. In other words, we discuss the precision of the things that we can and do measure on the assessment.
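Those small error bands typically come from the standard error of measurement in classical test theory. A minimal sketch, with entirely made-up numbers (the scale score, standard deviation, and reliability below are illustrative, not from any real assessment), shows how modest the reported uncertainty usually looks:

```python
import math

def score_band(observed, sd, reliability, z=1.96):
    """Confidence band for a true score under classical test theory.

    SEM = SD * sqrt(1 - reliability); the band is observed +/- z * SEM.
    The numbers passed in are illustrative, not from a real assessment.
    """
    sem = sd * math.sqrt(1.0 - reliability)
    return observed - z * sem, observed + z * sem

# A scale score of 500 with SD 50 and reliability 0.91 gives SEM = 15,
# so the 95% band is roughly 500 +/- 29.4 scale-score points.
low, high = score_band(observed=500, sd=50, reliability=0.91)
```

Note what the band quantifies: only the unreliability of the instrument itself. None of the uncertainty about the construct discussed below appears anywhere in it.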
Rarely, however, do we engage openly in discussions about the level of uncertainty surrounding the construct. Despite all of the attention devoted to alignment over the last 15 years, as a field we still have little understanding of the impact of varying degrees of alignment or of how to define and measure alignment to complex constructs. (Hint: It is not through counting items.)
More important, we have an insufficient understanding of how student performance on assessments can and should relate to student learning and achievement in a particular subject area over time. Whether one views the content from third grade through high school as a single construct, a set of interrelated constructs, or a network of learning progressions, there is a great deal of uncertainty about how student learning should progress on complex constructs, and that uncertainty is totally unrelated to the assessment.
Modeling is not measurement
The complexity of the constructs we hope to understand is one reason why our science of educational measurement is based primarily on modeling, probability, and prediction. The goal is to uncover and understand relationships among factors included in the model. Our field, however, favors discussing the precision and certainty implied by measurement rather than having to deal with the uncertainty of modeling.
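Much of that modeling is item response theory, in which a student's performance is expressed as a probability rather than read off a ruler. The Rasch model is the simplest case; the ability and difficulty values below are illustrative:

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability that a student with ability theta
    answers an item of difficulty b correctly (both on a logit scale).
    The model predicts performance; it does not measure it directly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Two hypothetical students, one logit apart, on the same item:
p1 = rasch_p(theta=0.0, b=0.0)   # exactly 0.5
p2 = rasch_p(theta=1.0, b=0.0)   # about 0.73
```

Even in this simplest model, the output is a probability of success, and different model choices (two- or three-parameter models, different estimation methods) yield different values from the same responses, which is exactly the model-to-model variability described below.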
Like the modeling used to predict the weather or the number of games our team will win next year, there is usefulness in the modeling associated with educational assessment. However, as in those systems, there is also uncertainty. There are multiple models and there is variability across the models. Beyond the models themselves, however, there is inherent uncertainty – factors that the models cannot predict. We all have experienced a day when a weather front moved more slowly or quickly than expected, ruining our plans for a cookout, ball game, or commencement ceremony.
We, therefore, specialists in educational measurement, must be willing to solemnly publish and declare that there is uncertainty in educational assessment, uncertainty that cannot be eliminated simply by building better assessments or better assessment models. And, by doing so, we pledge our support to preparing educators and policy makers who, as consumers of assessments and assessment results, are better prepared to work with uncertainty. As Voltaire wrote, “Doubt is not a pleasant condition, but certainty is absurd.”