The loss of state assessment results in the wake of COVID-19 does not have to mean a loss of information about student proficiency
Given that the COVID-19 pandemic is affecting nearly all aspects of our lives, it is not a surprise that it has brought a critical nationwide, federally mandated data collection effort to a halt. I am not referring to Census 2020, which was forced to suspend all of its field operations. Nor am I referring to the IRS and Tax Day, which has been moved from April 15 to July 15. No, the nationwide data collection effort to which I am referring is the annual administration of state assessments to millions of public school students in grades 3 through 8 and high school.
With school closures affecting more than 55 million students across the country and nearly all states obtaining testing waivers, it is nearly a certainty that there will not be, and should not be, state testing this spring. This cancellation of testing causes a significant hardship for assessment contractors and, more importantly, deprives states of information used to inform policy, a condition which, if we believe in the reasons for testing, is ultimately harmful to students.
The “good news” is that data that would have been collected through state testing is not lost. Like the data that is collected through the census or tax filings, data on student proficiency on state standards in the 2019-2020 school year is still there waiting to be collected. We may just have to adjust our thinking on what data we are collecting and how we are collecting it.
State Assessment is a Data Collection Effort
First and foremost, we have to recognize that state assessment is at its core a data collection effort. Because the current solution includes an assessment, we have fallen into the trap of viewing the task of collecting data on student proficiency from a measurement perspective and treating all challenges to the process as measurement problems. The fundamental task, however, similar to the census, is to produce an accurate count of the number of students in the state who are meeting state achievement standards. The task is not to measure student proficiency.
It can certainly be argued that at one time the most accurate and efficient way to collect the desired information was through an assessment administered to students statewide. That solution, however, became less desirable over time as state content standards became more complex, state achievement standards became more rigorous, requirements to include all students became more rigid, and the consequences associated with the results of the assessment increased (see Campbell’s Law).
At the present time, the current model of state assessment is fast becoming an anachronism; perhaps not as much of an anachronism as annual tax filings, but more of an anachronism than the census, if for no other reason than the sheer frequency of state assessment.
States have known since at least the 1990s that an on-demand test composed primarily of selected-response items was insufficient to fully measure student proficiency, but for 25 years that remained the most feasible and efficient solution. The PARCC assessment, however, was likely the field’s gallant last gasp at developing an on-demand state assessment to measure college-and-career readiness standards.
Moving forward, state assessment will still be at least a component of the best available solution to compile accurate information about student proficiency, but assessment is not the only solution.
There are Proficient Students Even if there Is No Assessment
There may be doubt about whether a tree falling in the forest makes a sound if no one is around to hear it, but there is no such doubt about student proficiency.
After accepting that the task is to count, not measure, we must recognize that students are proficient (or not) in English language arts, mathematics, science, and a host of other areas regardless of whether we administer an assessment.
Over time, the belief became ingrained that we need a state assessment to determine whether a student is proficient. The state assessment and its items defined the meaning of loosely worded state content standards. Achievement level descriptors were most often developed for the assessment rather than the content standards, and were used in conjunction with the unfortunately named process of standard setting to define the state’s achievement standards. In short, the state assessment system and student proficiency became a closed system.
Federal policy that decreed performance on state assessment as the gold standard for student proficiency and elevated alignment to state content standards as the most important evidence in the validation of state assessment programs only helped to keep the system closed.
The fact remains, however, that students acquire proficiency in English language arts and mathematics through curriculum and instruction aligned to state content and achievement standards. That proficiency builds over the course of the school year and resides within the student, not within the test, when she or he sits down in the spring to take the state assessment.
Our actions as assessment professionals, policy makers, and educators suggest that we have forgotten the principle that the purpose of assessment is not to define a construct such as proficiency in English language arts and mathematics, but rather to provide information that helps us accurately and consistently distinguish among students at various places along the proficiency continuum.
Teachers Should Be the Best Judges of Student Proficiency
If we accept that proficiency exists outside of the assessment, then it follows logically that the best judge of a student’s proficiency should be the teacher who a) has deep knowledge of the state content and achievement standards and b) has been instructing that student for seven months with curriculum, instruction, and formative assessment practices aligned to those standards. Setting aside for now debate about the extent to which these two conditions are met in classrooms across the country, nobody is in a better position than the student’s teacher to make an informed judgment about student proficiency.
There are, of course, many reasons why states do not and should not rely on teachers’ judgments alone when collecting information about student proficiency for school accountability. Concerns about self-reporting of results for accountability purposes are real, as is the fact that one of the primary things that we are measuring or evaluating through school accountability systems is the extent to which there is alignment between the state’s and local educators’ understanding of the state content and achievement standards.
The current situation, however, presents an opportunity to collect those teacher judgments with minimal risk. First, accountability waivers will eliminate high-stakes uses that might bias judgments. Second, most states have school- and student-level data from previous years against which to monitor these judgments.
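As a minimal sketch of what such monitoring could look like, the check below flags schools whose teacher-judged proficiency rate departs sharply from the prior-year assessment rate. The function name, the 15-point threshold, and the sample data are all illustrative assumptions, not anything specified in this article:

```python
# Hypothetical monitoring check: compare teacher-judged 2019-2020 proficiency
# rates against prior-year assessment rates at the school level.
# The threshold and sample data below are illustrative assumptions.

def flag_schools(prior_rates, judged_rates, threshold=0.15):
    """Return school IDs whose teacher-judged proficiency rate differs
    from the prior-year rate by more than `threshold` (proportion units)."""
    flagged = []
    for school, prior in prior_rates.items():
        judged = judged_rates.get(school)
        if judged is not None and abs(judged - prior) > threshold:
            flagged.append(school)
    return flagged

# Illustrative data: proportion of students judged proficient or above.
prior = {"School A": 0.52, "School B": 0.40, "School C": 0.61}
judged = {"School A": 0.55, "School B": 0.72, "School C": 0.58}
print(flag_schools(prior, judged))  # School B jumps 32 points and is flagged
```

A flag like this would not prove a judgment wrong; it would simply identify results that warrant a closer look before they are reported.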
The next critical question is whether enough instruction has taken place to enable teachers to make the necessary judgments. The answer to that question is unequivocally yes. If state testing had already started or was about to start within the next month, teachers have sufficient evidence to make an informed judgment of student proficiency. I would argue that teacher judgments about student proficiency at the time of school closures are a more accurate reflection of the level of proficiency a student acquired during the 2019-2020 school year than an assessment administered when school resumes, either this year or next year. There will be other reasons for measuring student performance at that time.
So, if teachers have the data that states need, is there a feasible way for the state to collect it?
Collecting Data from Teachers on 2019-2020 Student Proficiency
With relatively minor adjustments, it should be possible to use the same infrastructure already in place to administer state assessments to collect teacher judgments of student proficiency. Given that testing was about to begin, we can assume that student registration lists had already been prepared to sign students into computer-based tests and that procedures were in place to provide access to teacher test administrators as well. States or assessment contractors may not have access to information needed to assign individual students to specific teachers, but that is a minor inconvenience.
Preparing online resources, instructions, and a form for teachers to enter ratings of student proficiency would not be a heavy lift, certainly not in comparison to scoring, processing, and equating tests. States and their contractors can decide what judgment they would like teachers to make.
Using the NAEP achievement level categories of Below Basic, Basic, Proficient, and Advanced as an example, a state might ask teachers to assign students to one of the four achievement levels or simply to indicate whether the student’s level of proficiency was at the Proficient level or above (i.e., Proficient or Advanced). In activities conducted in conjunction with standard setting for a state assessment, we have asked teachers to designate students’ proficiency as Low, Medium, or High within one of the four achievement levels (a total of 12 possible classifications). My personal preference, however, is to allow teachers to use borderline categories, for a total of 7 possible classifications: Below Basic, Borderline Below Basic/Basic, Basic, Borderline Basic/Proficient, Proficient, Borderline Proficient/Advanced, Advanced.
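The 7-category scale can be sketched concretely. In this hypothetical snippet, only the category names come from the text above; the data structure, the function, and the decision to count the Basic/Proficient borderline as below Proficient are my own illustrative assumptions:

```python
# The four NAEP-style achievement levels named in the text.
LEVELS = ["Below Basic", "Basic", "Proficient", "Advanced"]

# Build the 7-category scale: each level plus a borderline category
# between each pair of adjacent levels.
CLASSIFICATIONS = []
for i, level in enumerate(LEVELS):
    CLASSIFICATIONS.append(level)
    if i < len(LEVELS) - 1:
        CLASSIFICATIONS.append(f"Borderline {level}/{LEVELS[i + 1]}")

def is_proficient_or_above(rating: str) -> bool:
    """Collapse a 7-category rating to the binary Proficient-or-above judgment.

    Assumption: Borderline Basic/Proficient counts as below Proficient;
    a state could reasonably decide otherwise.
    """
    if rating not in CLASSIFICATIONS:
        raise ValueError(f"Unknown rating: {rating!r}")
    return CLASSIFICATIONS.index(rating) >= CLASSIFICATIONS.index("Proficient")
```

Ordering the categories this way also means a state could later collapse the 7-point scale back to 4 levels, or to the binary judgment, without asking teachers to re-rate students.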
Will the results of the teacher judgment process be totally accurate, complete, or interchangeable with assessment results? Probably not, but that’s OK. They can still become useful information to support the school improvement process.
More Than A Stopgap
If I viewed the collection of teacher judgments only as a one-time stopgap to make the best of the 2019-2020 school year, I might hesitate to suggest it. It is a fact, however, that if we have any hope for education reform and school improvement efforts to be successful, we need teachers to understand what proficiency is and to be able to classify student performance along the proficiency continuum.
One of the big unanswered questions when state assessment results are released each year is whether those results are consistent with the way that local administrators and teachers perceive their students’ performance.
It is also a fact that we are not going to be able to continue to use on-demand large-scale assessment to measure the complex knowledge, skills, and abilities that we want students to acquire. It is inevitable and desirable that in the near future states are going to have to rely on teacher judgment of student performance as a key part of the information they collect from schools each year.
Given the conditions in a particular state, it might be foolish for state assessment leaders to consider any type of data collection in the coming weeks or months. However, if a state is seeking a way to recover the data lost from cancelling testing in 2019-2020, this is a unique opportunity to take a first step toward collecting that information. We might as well use it.