Once again, those of us on the technical side of large-scale assessment and educational measurement find ourselves behind the curve.
In the 1990s, the public clamored for achievement levels and criterion-based results while we were comfortable reporting percentile ranks and grade equivalent scores.
Just as we were “getting a handle” on standard setting and percent proficient, NCLB made annual testing at grades 3 through 8 a reality and growth the “score du jour”.
We figure out operational best practices for creating and equating custom annual state tests and BOOM – it’s computer-adaptive testing and mix-and-match tests pieced together from item banks.
Now it’s 2022, the song of the summer is “accelerated learning”, and it is critical that students and schools accelerate learning like never before.
The “learning acceleration train” is pulling out of the station and picking up speed. We are not prepared to jump aboard, let alone shovel the coal, or sit in the front seat and blow the whistle.
What’s a large-scale assessment specialist to do?
Acceleration, Learning, and Large-Scale Assessment
You may be asking yourself whether accelerated learning really is different from what we’ve been doing the last 20 years – particularly with regard to large-scale assessment. NCLB “required” schools to get 100% of kids to Proficient. The Common Core and ESSA set the expectation that all students would graduate from high school college-and-career ready. Didn’t those initiatives require accelerated learning?
Well, yes and no. Or yes, but we didn’t know. Or yes, but we didn’t want to know.
With NCLB, the focus was on focus. Schools just weren’t focused on teaching the right thing. If we establish state content and achievement standards, test every year, tutor a few kids, and hold schools accountable, they will do the right thing and 100% of kids will be Proficient.
College-and-career-readiness was in many ways more of the same with standards ratcheted up a few notches and a little more focus on the quality of instruction (for a few minutes).
The thing about accelerated learning is that both words in the term – acceleration and learning – present a challenge to those of us involved in large-scale assessment.
As much as it pains us to admit it, we are not part of the learning process. I do not mean to suggest that Large-Scale Assessment cannot, or does not, have an effect (sometimes positive, often negative and outsized) on the Learning Process – but we’ll save that discussion for another post.
Our newfound friends in formative assessment are directly involved in the learning process.
Our associates who work directly with schools and districts on curriculum and other things that take place in the classroom might be involved in the learning process.
The rest of us – not so much.
Our colleagues involved in interim assessment provide information to help districts and schools monitor the results of the learning process a few times per year.
The efforts of those of us involved in end-of-year state summative testing have been directed at measuring one important outcome of the learning process – student academic achievement, which we have packaged at various times as basic skills, literacy (e.g., 3rd grade reading), proficiency, or college-and-career readiness.
To borrow Black and Wiliam’s imagery, as far as large-scale assessment is concerned, Learning is very much a black box that we cannot see inside. Our relationship to Learning is quite simple, straightforward, and very indirect.
Learning produces Achievement. We design a Large-Scale Assessment program to funnel a limited amount of information about Achievement into a format useful to inform policy. And note that I drew Achievement as a gray box because we really only know slightly more about the Achievement that our tests measure than we do about Learning.
Nothing about this description of the relationship between Learning and Large-scale Assessment should be news to anyone working inside or outside of the black box. Nonetheless, it bears repeating.
Acceleration is defined generally as an “increase in speed or rate” or in physics as “the rate of change of velocity per unit of time.”
There are a couple of key points to note in those definitions.
The first is that acceleration involves determining change, and we all know that simply hearing the word change sends shivers up and down the spine of anyone with even a modest understanding of educational measurement, statistics, or large-scale testing.
The second is that acceleration involves a change in speed or velocity. The assumption, therefore, is that you know the speed or velocity of the object that you are measuring – and that you know its velocity (or average velocity) across a time span.
To state the obvious, if we (i.e., large-scale testing folks) know next to nothing about what is taking place within the black box, we probably know very little about the speed or velocity of the learning occurring there. It’s not been on our radar.
What about achievement? Is achievement an adequate proxy for learning? Again, a lengthy topic for a later day, but for today let’s say that it is.
What can we say about speed and acceleration with regard to achievement?
We are in somewhat better shape with regard to what we can say about the speed of Achievement. We have vertical scales (going back to NRT days) suggesting that achievement growth slows down across grades K-12; and perhaps more importantly for the current discussion, we have achievement standards indicating that expected changes in achievement also slow down across grades K-12 – at least for those things we choose to measure. What’s up with that? I’m a bit skeptical about vertical scales, but I put more stock in the achievement standards.
We also have “measures” or “indicators” of student growth that quantify some aspects of the change in Achievement across two points in time. The very best of those, however, remain norm referenced. The very worst of those are based on meaningless score differences on vertical scales.
Aside from moving students from Proficient to Proficient across grades, we have struggled mightily with defining the concept of a “year’s worth of growth”, let alone measuring it.
Let’s think of the learning acceleration issue in terms of calculus, the language of true psychometricians and traditional model builders:
- Step 1 is to arrive at some level of agreement on the latent trait, construct, or outcome that we wish to measure/model (i.e., Achievement).
- Step 2 is to develop (or discover, if you are a quantitative romantic) the function that describes, with some reasonable level of accuracy, a student’s position with regard to Achievement as a function of time.
- Step 3, Velocity would be the first derivative of that function.
- Step 4, Acceleration would be the second derivative.
- Step 5, Connect that second derivative back to instruction and student learning. Easy breezy.
We are still actively debating Step 1.
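Supposing for a moment that Steps 1 and 2 were ever settled, Steps 2 through 4 can be sketched numerically. The achievement function below is a purely hypothetical logistic curve chosen for illustration – it makes no claim about real achievement data.

```python
# Hypothetical sketch of Steps 2-4: given an (imaginary) achievement
# function A(t), velocity is its first derivative and acceleration its
# second. The logistic curve is an assumption, not real data.

import math

def achievement(t):
    """Step 2 (hypothetical): achievement as a function of years of
    schooling, via a logistic curve - fast early growth that slows."""
    return 100 / (1 + math.exp(-0.5 * (t - 4)))

def derivative(f, t, h=1e-5):
    """Numerical first derivative via the central-difference formula."""
    return (f(t + h) - f(t - h)) / (2 * h)

def velocity(t):
    """Step 3: rate of change of achievement (the 'speed' of learning)."""
    return derivative(achievement, t)

def acceleration(t):
    """Step 4: rate of change of velocity."""
    return derivative(velocity, t)

# Early grades: growth is fast and still accelerating (positive second
# derivative); later grades: growth continues but decelerates.
print(velocity(2), acceleration(2))
print(velocity(8), acceleration(8))
```

Under this made-up curve, velocity is positive everywhere but acceleration flips sign partway through the grade span – exactly the kind of distinction the growth-versus-acceleration discussion turns on.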
Accelerated Learning – How Best Do We Monitor and Support It?
If you have made it this far into the post, you are probably wondering how he is going to pull out of this free fall. How will he turn this falling rock of a post into something positive and uplifting?
Surely, there must be a way for large-scale assessment to monitor and support accelerated learning.
Well, yeah, there is. Here are a few recommendations:
- The most important thing that we must do to support accelerated learning, in all seriousness, is to not interfere, to stay the hell out of the way. There must be nothing in the rebooting of state assessment programs and accountability systems that makes it more difficult to accelerate learning or detracts from the goal of accelerating learning.
- We must ask our formative assessment colleagues how we can support them in doing their formative assessment thing. This is their show. It may be that they need our help collecting and interpreting implementation and process data more than they need additional large-scale assessment data. We can do that.
- The evolution of large-scale assessment is naturally leading assessment toward the classroom and more embedded, curriculum-aligned, continuous assessment. That is a good thing. The evolution of psychometrics is leading toward the modeling of learning as well as achievement (no, modeling the processes used to respond to large-scale test items is not the same as modeling learning). That is also a good thing. We must not rush either evolution in a well-intentioned, but terribly misguided, attempt to support accelerated learning or to appear relevant.
- We must commit to clear, honest communication and resist the pressure to say misleading things like claiming the fastest (or largest) increase in test scores in history when that increase merely reflects a partial recovery from the largest decrease in history the previous year. Also, let’s avoid reporting increases or decreases in terms of meaningless percentages.
- We must get our own educational measurement and large-scale assessment house in order.
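The arithmetic behind the communication warning above is worth making explicit, because percent changes are not symmetric around a drop. The scale scores below are invented for illustration.

```python
# Invented numbers showing why a record "increase" can still be a net
# loss: the percentage gained off a lower base overstates the recovery.

baseline, after_drop = 250.0, 225.0  # hypothetical mean scale scores
drop_pct = (after_drop - baseline) / baseline * 100  # about -10%

# A year later, scores post the "largest increase in history" ...
after_rebound = 240.0
gain_pct = (after_rebound - after_drop) / after_drop * 100  # about +6.7%

# ... yet remain below the pre-drop baseline.
net_pct = (after_rebound - baseline) / baseline * 100  # about -4%
print(drop_pct, gain_pct, net_pct)
```

A 10% drop followed by a 6.7% rebound still leaves scores 4% below where they started – which is the honest headline.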
Beyond those recommendations, to the best of our ability, we must clearly define what is meant by accelerated learning. In general, we accelerate because we have fallen behind, need to escape danger, or want to reach a destination more quickly. It’s usually not a good idea to accelerate without one of those goals in mind.
We must expect that the term acceleration will not always be used in ways that fit the standard definition. More importantly, we must be ready to accept that the terms acceleration and learning likely will have different meanings across different layers of the educational ecosystem.
Different meanings are OK. We are used to dealing with different meanings. A lack of clarity is not OK.
What does accelerated learning mean at the classroom level and how is it achieved?
- Does it mean that the learning process becomes more efficient, more effective, or both?
- Does it mean that “more” learning occurs in a fixed period of time? If so, which unit of time is more critical: elapsed time or instructional time? That is, does it count as accelerated learning if the “rate” of learning per instructional hour remains the same, but I increase the number of instructional hours in a school year by 11%, 22%, or 33% (i.e., from 180 days to 200, 220, or 240 days)?
- Accelerate learning compared to what: the 2022 “rate of learning”, the 2019 “rate of learning”, or something else?
- Accelerate learning for all students or increase the mean rate of learning by accelerating learning for students below the current 25th, 50th, or 75th percentile?
- Endgame? Accelerate learning to accomplish what, when? Will acceleration be short-term, long-term, permanent?
What does the outcome of accelerated learning at the classroom level look like on large-scale state tests in spring 2023 and beyond?
- Are we looking for year-over-year changes in student growth? [My guess is the answer to this question is yes.]
- Are we looking for simple increases in the percentage of students meeting performance standards or growth targets from one year to the next (e.g., 38% in 2022, 42% in 2023, 45% in 2024)?
- Are we looking for year-over-year increases in the change in percentage of students meeting performance standards or growth targets (e.g., 5 percentage point increase from 2022 to 2023, 8 percentage point increase from 2023 to 2024, …)?
- Is there a specific short-term or long-term goal in mind in terms of achievement or growth? Are there corresponding interim benchmarks?
- Are we trying to return to a pre-pandemic trend line in achievement or growth within a certain period of time? If so, is that an actual, empirical trend line or an aspirational trend line necessary to meet the state’s long-term accountability goals?
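The distinction between the second and third questions above is the distinction between a first difference and a second difference – crude stand-ins for velocity and acceleration. Using the hypothetical percent-proficient figures from the bullets (38%, 42%, 45%):

```python
# Percent proficient as a crude "position," year-over-year change as a
# crude "velocity," and the change in that change as a crude
# "acceleration." All figures are hypothetical.

pct_proficient = {2022: 38, 2023: 42, 2024: 45}
years = sorted(pct_proficient)

# First differences: year-over-year change, in percentage points.
change = {y2: pct_proficient[y2] - pct_proficient[y1]
          for y1, y2 in zip(years, years[1:])}  # {2023: 4, 2024: 3}

# Second difference: is the rate of improvement itself increasing?
accel = change[2024] - change[2023]  # 3 - 4 = -1

print(change, accel)
```

Note that under these numbers the percentage proficient rises every year, yet the second difference is negative: achievement is improving while its improvement decelerates. Which of those two facts a state chooses to headline is exactly the kind of question that needs to be settled up front.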
Do any of the questions from the classroom level or state test level apply to interim assessments administered throughout the year or do those assessments require an entirely different set of questions?
They say that for an elite athlete the game, and time itself, slow down when the pressure is greatest and things seem most chaotic. If we are going to make a solid contribution to the effort to accelerate learning and to the recovery in general, we are going to have to do the same.
We cannot slow down the “learning acceleration train” (and don’t want to), but by taking a deep breath, looking at the big picture, and staying within ourselves, we can “slow down the game”.