The Most Important Question

In states across the country, lots of questions are being generated as state policymakers and local educators pore over results from the Spring 2022 state tests in English language arts, mathematics, and science. More often than not, however, the single most important question of all is not asked directly.

At the district and school level:

Do you agree with these results?

At the state level:

Do local educators (i.e., teachers and school administrators) agree with these results?

Even more than actual school or student performance (high, low, or in between), the answer to that question informs and guides next steps with regard to school improvement, recovery, or education reform in general.

It’s simply a very different ballgame if the people in the schools have a different perception of their students’ level of proficiency than the picture that is being painted by state test results.

If there is disagreement, resolving that disagreement is the first step in any productive effort to support the school.

The first step to recovery is acknowledging you have a problem.

Questioning Is Different than Asking the Question

Any teacher or school administrator who has looked at state test results has in some way formed an opinion about the extent to which they agree or disagree with the data before them.

It may seem silly, therefore, for me to suggest that the question of agreement with state test results is rarely addressed directly when questioning state test results is something of a national pastime – one that is certainly more entertaining than baseball.

Questioning state test results takes many shapes and stems from many sources:

  • The state test score is simply a single data point, and all that is associated with that argument.
  • Student engagement and motivation to perform well on the state test are lacking or non-existent.
  • The state test measures only a small portion of the state standards or a small portion of the curriculum.
  • Standardized testing is not designed to allow our students to demonstrate their proficiency.

“Disagreement” with state test results may also involve and reflect a degree of protective rationalization or a certain level of explaining and excuse making.

All of that questioning, however, is very different than determining the level of agreement between local educators’ perceptions of their students’ proficiency and the estimate of proficiency provided by state test scores.

Setting the Stage to Ask the Question

In this day and age there is no reason why states should not be collecting information about student proficiency directly from teachers. We have the technological capacity. We have the infrastructure in terms of student identifiers and enrollment information.

It would be very simple logistically, and not much of an additional burden, to ask teachers to provide a rating of the proficiency of each of their individual students at the time of testing, at the end of the school year, or both.

There are certain conditions, however, that must be in place for such an effort to be fruitful.

First, achievement standards and achievement level descriptions must be tied to the state’s content standards and not to the state assessment program.

Too many states build a wall around their state assessment program, its test forms, achievement standards, and achievement level descriptions. It’s my personal opinion that this practice has only served to further isolate state assessment from content standards, curriculum, and instruction. Ultimately, the utility of the assessment program and the effectiveness of its signal regarding student proficiency are diminished.

A first step in connecting the achievement standards and achievement level descriptions with the content standards is to situate their development outside of the assessment program. The achievement standards and achievement level descriptions should be developed up front with the content standards.

The diverse set of content experts, educators, and other stakeholders involved in developing the state content standards should also develop the achievement standards and achievement level descriptions for that content area. The content and achievement standards may be developed simultaneously as part of an iterative process.

There must be a close connection between the content standards and achievement standards so that together they inform curriculum, instruction, and assessment.

Second, there has to be a concerted effort on the part of the state to ensure that local educators, beyond those involved in their development, understand and are comfortable with the achievement standards.

Creating that necessary level of comfort requires a proactive outreach program, such as a This Is What Proficient Looks Like campaign. (Doesn’t proactive outreach sound better than professional development or training?)

Ideally, examples of “student proficiency” will be drawn from a variety of contexts – assignments, performance tasks, class projects, extended capstone projects, as well as performance on the state assessment – and will be presented in a variety of formats. In addition, the program should be ongoing and dynamic, curating a variety of examples of student proficiency from a variety of sources.

Proficiency may not be in the eye of the beholder, but the many ways that it can be demonstrated are a beautiful sight to behold.

Ultimately, classifying the proficiency of student work should be a relatively straightforward task for teachers, students, and parents.

When it comes time for a teacher to answer the question “Is Johnny proficient?”, it really should not be a difficult judgment for them to make.

Third, there has to be a belief that student performance on the state tests reflects something more than the literal – a student’s performance on one particular set of items on a particular day. There has to be a belief that a test score generalizes beyond a specific test administration to support inferences about student proficiency.

Communicating information about the interpretation of any test score requires striking the proper balance between embracing generalization while exercising appropriate levels of caution.

Beginning with individual student scores and leading with the precept that all test scores contain error and that no high-stakes decisions should be based on a single test score, it is difficult to construct an effective message of generalization. That task becomes even more difficult when the audience is opposed to the testing program in principle and more receptive to arguments that trivialize test scores.

To be fair, beginning with aggregate, group-level scores and emphasizing the reliability of the test with an audience receptive to state testing will be equally problematic, but in the opposite direction.

As they say, if it were easy, everybody would do it. Nevertheless, of our own free will, we have embraced this task of describing endless shades of gray to an increasingly black-and-white world so we must see it through.

Fourth, there has to be a level of transparency and trust established with regard to how teachers’ judgments of student proficiency are going to be stored and used.

There is no need, and not enough space, for me to describe the myriad ways that individual teacher ratings of individual student proficiency collected by the state might be misused.

It is probably obvious by this point that the four conditions described above apply much more broadly than to today’s simple case of asking teachers to provide their judgment of student proficiency. The four topics discussed are preconditions for the successful implementation of any standards-based reform effort and any state assessment program intended to have utility beyond meeting federal assessment requirements.

Unfortunately, decades of viewing state assessment through a compliance lens preceded by an era in which achievement standards were an afterthought used solely to describe test performance did not produce an environment conducive to ensuring that those preconditions are met.

Asking the Question

Is [insert student name here] proficient in [insert content area here]? 

The question would not be worded exactly like that, of course, but you get the idea. Personally, I prefer a question that allows the classification of student proficiency into three broad categories: well above Proficient, well below Proficient, and a middle category for student performance that is within shouting distance of Proficient (falling on either side of the line).

The messy-in-the-middle category is sufficient for my needs, but actually asking a teacher to place student performance on one side of the “Proficient line” or the other should not be too big of an ask.

If the preconditions have been met and we can assume that teachers understand the state’s achievement standards, making a judgment about student proficiency against those standards should be a relatively straightforward task. In fact, it should be a judgment that teachers have been making and adjusting continuously in real time throughout the year.

Unfortunately, once again, we rarely ask teachers (or students) to put together all of the pieces of evidence gathered during a school year’s worth of instruction to make an overall judgment of student proficiency.

Traditional grading systems based on averages certainly don’t directly address overall proficiency. Standards-based grading systems may remove the evils of averages, but often fall short with regard to relating mastery of individual standards to a concept like student proficiency. Even competency-based systems may not be designed to lead to a culminating competency that requires the application and integration of previously acquired competencies, the whole that is greater than the sum of its parts; that is, proficiency.

Some schools and districts may be moving closer to the concept of proficiency with things like capstone projects or portfolios, but it’s still not quite the same.

Considered from this perspective, perhaps it is less surprising that we don’t ask directly whether local educators agree with the overall rating of student proficiency provided by the state assessment. The achievement level “score” on the state test may be the only indicator of overall student proficiency in a content area that exists in the teacher’s world or on a student’s transcript.

And that means that we are back in the position of the state assessment providing teachers new information about the performance of individual students – which, as I have written repeatedly, is something that should never happen in a system that is functioning properly. New information about individual student performance flows one way.

Which makes it even more of an imperative that we get to the point where we can ask teachers directly to make a judgment about student proficiency.

Nobody, especially those of us who have seen behind the curtain, wants to live in a world where the state assessment is the sole determiner of what student proficiency is and whether students are proficient.


Published by Charlie DePascale

Charlie DePascale is an educational consultant specializing in the area of large-scale educational assessment. When absolutely necessary, he is a psychometrician. The ideas expressed in these posts are his (at least at the time they were written), and are not intended to reflect the views of any organizations with which he is affiliated personally or professionally.
