In my last post, Right Before Our Very Eyes, I called on assessment specialists to focus their efforts on understanding and explaining observed scores. This post, which could have had the same title, is a reminder that, in general, educational assessment involves dealing with known quantities.
The purpose of 99.9% of educational assessment is not to discover something new or unknown. Rather, in broad terms, the purpose is to apply what we know to collect information that will support decision-making, whether the decision-makers are policymakers in the state house or teachers in the classroom.
My focus on what is already known may seem counterintuitive because, by its very nature, assessment seems to be steeped in unknowns. We assess because we don't know. We are seeking information that we don't already have. If we already knew the answer, there would be no need to ask the question, let alone all of those carefully constructed, time-consuming, and fairly expensive sets of questions. That's fair.
Educational assessment does involve collecting information that somebody needs but does not have. But somebody not yet having information is quite different from that information being unknown.
Whether we are considering large-scale state summative assessment or applying formative assessment processes in the classroom, we are starting from a position of knowledge. Given that state assessment and formative assessment are very different processes used for very different purposes, it should come as no surprise that they differ significantly in what is known and by whom.
Large-Scale State Summative Assessment
In state testing, information on student proficiency is what policymakers need but do not have. The primary purpose of state testing is to provide state policymakers with information about student proficiency (at the aggregate level) to inform decisions about programs and policies related to school quality/effectiveness, opportunity-to-learn, equity, etc.
That information about student proficiency, however, is known by and readily available to every other key participant in the system without the state test – or at least it should be.
That claim rests on my oft-stated premise that students already possess the ability, achievement, or proficiency that you are “assessing” before you administer the state test.
If that premise is true, a lot of people should know in advance what the test results will show. Anybody familiar with the student should have a very good idea. The student should know. The student’s teachers definitely should know. Parents, friends, and perhaps general acquaintances might also know.
At the time of testing, school administrators and faculty should be able to sit down and fill in virtually all of the information commonly provided about their school and students in a state test School Report. In fact, I have heartily recommended this as a staff bonding and reflection activity while awaiting the return of state test results.
The exception, of course, would be scores and information that require data from outside of the school such as district and state results or growth scores. This comparative, or norm-referenced, information should be the primary value-added of state testing for schools, and it is valuable information.
The idea that the state test isn't generating new information about individual student performance (except perhaps comparative information) is so basic and fundamental that one has to wonder how we managed to lose sight of it.
What set of factors and forces led us to the prevailing view that the state test is necessary to determine student proficiency?
I assign equal blame, or responsibility, to the measurement community and the state testing community.
Let’s start with a quick review of the history of educational measurement and large-scale assessment.
An Overly Simplistic but Brief History of Educational Measurement
Or the 9 Circles of Large-Scale Summative Assessment
- Start with a group of people and a criterion of interest (e.g., intelligence, mathematics achievement, or some other aptitude or trait).
- Seek a simple measure that correlates well with and predicts the criterion of interest.
- Work on improving the measure.
- Realize that the quality of the criterion affects the accuracy of the measure.
- Work on improving the criterion.
- Continue to work on improving the measure.
- Confound the measure and the criterion.
- Conflate the measure and the criterion.
- Forget that there is an external criterion.
We can question whether our measurement pioneers' hearts and minds were in the right place when they started this endeavor in Circles 1 through 3. That's neither here nor there for this discussion. Somewhere along the line, folks realized that the criterion group had to include people from more than one race, gender, culture, socioeconomic class, etc. That was all well, good, and proper, and work continued through Circles 4 through 6.
As the verbs suggest, things started to go astray in Circles 7 and 8. As psychometricians worked to improve their measures, they also decided that they needed to collect more accurate information about the criterion on which to test those measures. With both strains of work attacking the same construct from two angles (measure and criterion), it was inevitable that the two would blend into one.
Finally, a couple of decades ago, we reached Circle 9, where measure and criterion collapse into a single black hole of measurement in which content standards are torn apart into their smallest bits.
A bit overdramatic perhaps, but this is educational measurement we’re talking about.
The problem is that, by forgetting that there is an external criterion (in this case, tens of thousands of students with known levels of proficiency), we lose the opportunity to incorporate that information into our test development, analysis, and validation (including cross-validation) processes.
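To make the cross-validation point concrete, here is a minimal sketch, in Python, of one way such external information could be used: comparing hypothetical teacher-judged proficiency categories (gathered before testing) with test-based classifications for the same students, using percent agreement and Cohen's kappa. The students, categories, and choice of statistics are illustrative assumptions, not a prescribed procedure.

```python
# A minimal sketch of using externally known proficiency -- here, hypothetical
# teacher judgments -- as a criterion for checking test-based classifications.
# All names and the sample data below are illustrative assumptions.

from collections import Counter

LEVELS = ["Below", "Approaching", "Proficient", "Advanced"]

# Hypothetical categories for the same ten students.
teacher_judgment = ["Proficient", "Below", "Advanced", "Proficient",
                    "Approaching", "Proficient", "Below", "Advanced",
                    "Proficient", "Approaching"]
test_classification = ["Proficient", "Approaching", "Advanced", "Proficient",
                       "Approaching", "Advanced", "Below", "Advanced",
                       "Proficient", "Below"]

def agreement_stats(a, b, levels):
    """Percent exact agreement and Cohen's kappa between two sets of ratings."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement under independence, from each rater's marginals.
    ca, cb = Counter(a), Counter(b)
    expected = sum((ca[lvl] / n) * (cb[lvl] / n) for lvl in levels)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

obs, kappa = agreement_stats(teacher_judgment, test_classification, LEVELS)
print(f"Exact agreement: {obs:.0%}, Cohen's kappa: {kappa:.2f}")
```

Low agreement on a check like this would flag the measure, the criterion, or both for further scrutiny; that is exactly the kind of question that disappears once measure and criterion have collapsed into one.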
Meanwhile, in state testing…
The ABCs of State Testing in the Standards Era
- Develop and disseminate state content standards
- Develop state assessment based on state content standards
- Create achievement standards based on performance on state assessment
From the beginning, it seems, both testing folks and educators were perfectly happy to build a wall separating state testing from the important work of curriculum, instruction, and assessment.
Come in, give your test, don’t take up too much of our time, and be on your way. We’ll get back to doing our thing. That was the social contract.
The state assessment program was allowed to instantiate the state's content standards – as George Madaus warned. Even worse, the task of defining the state's achievement standards also fell within the purview of the state assessment program. Achievement level descriptions and achievement levels did not exist outside of it. Consequently, the concept of student proficiency became inextricably linked to student performance on the test.
When the social contract changed under NCLB, holding districts, schools, and ultimately educators accountable for student performance on the state test, human nature and Campbell's Law only solidified the view that student proficiency was defined by the state test.
And here we are today.
Formative Assessment Processes in the Classroom
I’ll try to wrap up this post on a positive note.
With formative assessment, student proficiency on a particular standard, task, or concept is unknown, and the teacher and student are seeking to collect information to help them determine next steps in instruction.
What is known, at least by teachers, is what information to collect to determine gaps in student proficiency. Most importantly, teachers also know when and how to collect that information so that it will be most useful to the teacher and student during instruction.
While acknowledging the central role of the student in formative assessment, I think that I still view a teacher’s pedagogical content knowledge as critical for creating the environment in which a student’s self-assessment, processing of information, and metacognition can thrive and lead to fruitful outcomes.
Ultimately, it is through cycles of formative assessment during instruction that we reach the point described above where, at the time of state testing, teachers and students are well aware of the student's level of proficiency.
Pulling it Apart to Put it All Together
My purpose in reminding you that student proficiency exists and is known outside of the state test (or any test) is to encourage the process I discussed last week, in which achievement standards and achievement level descriptions derive from the content standards and are more closely associated with instruction than with state testing.
I am convinced that accomplishing that task is a necessary first step to achieving two critical goals:
- Achieving deeper learning for students, which cannot occur when achievement standards are constrained by the limits of state testing.
- Restoring a proper balance between measure and criterion in state testing, our own little corner of the educational measurement world.
Image by Stefan Keller from Pixabay