“Use your words, Charlie. Use your words.”
That’s what they would say to me when I was young and a situation became frustrating or overwhelming. It was very good advice.
By the time I reached first grade I had learned that with some well-chosen and well-timed words you could make people laugh, even adults. In high school, we learned to harness the power of words to reduce an opponent to a pile of rubble with a sarcastic turn of phrase that hit them where they were most vulnerable. In college, we learned that a well-crafted argument could have the same effect and was even more satisfying (a little less so when your adversary was hopelessly overmatched). As a teacher and consultant, I learned how to use words to build up, and the importance of choosing the right words (and props) to explain complex topics to eager and open minds.
I am going to need to draw on all of those skills to deal with my frustration over the continued love affair that folks in our field have with the concept of “years of learning” despite that love being unrequited. Talk about embracing the absurd.
It’s been a long time coming
There should be no need for me to contribute to the vast internet literature on the plethora of problems associated with the concept of years of learning. I could include links to dozens of articles published over the past decade or so, but as we have established, you don’t click on links in my posts. So either take my word for it or have at it yourself: google “year of learning” and “effect size,” or better yet ask your favorite AI engine to draft you a statement about years of learning and effect size.
I’ll bet dollars to donuts that it will return something like this human-generated statement found on a website:
Recall that an effect size of .40 equates to one year of learning over one year. Therefore, if a strategy holds an effect size of .80 that equates to two years of learning over a one-year period.
If it’s doing its AI thing well, the statement will likely include additional references to effect sizes anywhere between .16 and .70 that also “equate to one year of learning.”
But I would hope that no AI chatbot worth its salt would make the claim that if an effect size of .40 equates to “one year of learning” then an effect size of .80 “equates to two years of learning over a one-year period.”
That type of utterly baseless assumption requires a human touch and appears to be central to “years of learning” statements made by people from the tippity-top of the educational measurement field down to bloggers who have just read Hattie for the first time.
WT ever-loving F!
Almost absofreakinglutely nothing in educational measurement or assessment works that way. The ghost of Gene Glass’s measurement past is rolling over in S.S. Stevens’ grave.
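To see what the shorthand quietly assumes, write the arithmetic down. A minimal sketch, where g stands for the typical annual gain in standard deviation units (my notation; the shorthand never actually supplies this figure):

```latex
\text{years of learning} \;=\; \frac{d}{g},
\qquad
d \;=\; \frac{\bar{X}_{\text{post}} - \bar{X}_{\text{pre}}}{SD_{\text{pooled}}},
\qquad
g \;=\; \text{typical annual gain in SD units}
```

The “.40 equals one year” rule of thumb simply fixes g at .40 for every grade, every subject, and every population, and assumes learning accrues linearly in standard deviation units. Grant those assumptions and .80 buys you two years; question them and the whole conversion collapses.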
What about the kids…
Remember the “good old days” when we weren’t concerned with students’ learning and could talk about a “year of teaching” rather than a “year of learning”? A year of teaching was so much easier to quantify. It was 12 of the 14 chapters in the Algebra 2 textbook the department head handed me the day before school started. (There was always one chapter we skipped and one we didn’t get to.) If you needed to be more precise, in a 180-day school year, a “year of teaching” was 133 lesson plans (accounting for days lost to testing, assemblies, weather, holiday parties, and much-needed digressions). Or if you were working for a state department of education, a “year of teaching” might be 900 hours of structured instructional time.
But quantifying a “year of learning” is not quite so straightforward. Let’s start at the very beginning, a very good place to start. What is it that we (i.e., assessment and measurement folks) think makes a “year of learning” a useful concept or construct to educators?
Based on the way we talk about this intervention producing 2.4 years of learning or that instructional strategy yielding 3.2 years of learning, or the statement now accepted as gospel that a good or effective teacher can produce 1.5 years of learning, it appears that we are obsessed with the amount of learning that can be packed into a year; that is, a school year.
Once again, we have a focus on time devoid of content. No mention of what was learned or how long it took students to learn it. Consider the following:
It is fairly common to see an article touting something like this: a certain 8-week math intervention had an effect size of ‘x,’ which is equivalent to ‘y’ years of learning.
Newsflash: The amount of learning that your 8-week math intervention was able to produce is by definition (and common sense) equal to 8 weeks, or 2 months, of learning. Nothing more. Nothing less.
To provide useful context to those 2 months of learning, I might go with a statement like “as the result of our 8-week intervention, students in our sample were able to acquire [list specific skills], which students normally require 12 weeks of instruction to acquire” (or 10, 15, or 20 weeks, whatever the case may be). Such statements, however, are much rarer.
One could argue that the two statements above are saying the same thing, and technically that might be true, but they certainly come across very differently. And the second leads more naturally to follow-up questions such as whether the strategy can be replicated in other environments, under different conditions, and with other students.
It’s Not Rocket Science or Physics or Physical Measurement
When we (i.e., people who should know better) use a term like “year of learning” we are implying the existence of a constant. In the same way that a light-year is defined as the distance light travels in a year, a learning-year (or year of learning) evokes the image of the amount of learning that takes place in a year.
However, if we trust any of the work we have done to this point in time (and I know that it’s popular these days not to) perhaps the one thing that we know with certainty is that a “year of learning” is going to mean something very different at the elementary, middle, and high school level, not only in the obvious way (what is learned), but in the more relevant how much is learned – at least in terms of the way we measure learning.
And we know that a “year of learning” does not mean the same thing for different subgroups of students.
Which means that we are forced to interpret “year of learning” in a similar manner to the convoluted way that we have to interpret vertical scale scores (i.e., by considering not only the scaled score but also the grade level in which it was attained).
Now, my daughter strongly and wisely advised me against going with my instinct here to make that point by doing something like using subgroup labels to effectively create a separate learning-year for each subgroup. I’ll leave that to you to do on your own.
But what is the alternative? Do we handle this like DIF and compare each subgroup “year of learning” to a dominant referent group? That approach sounds presumptuous and fraught with problems.
Or do we just compare each subgroup year of learning to a national average, as we might do with state scores on NAEP? Useful, and I like a good national norm as much as the next guy, but comparisons to the national average are so last century. We can do better.
More importantly, the national average approach only makes sense if most applications of “year of learning” involve interventions and outcome measures that include a nationally representative sample. Unfortunately, they don’t. Often, we see “year of learning” computations and claims based on small-scale research studies subject to the idiosyncrasies of the individuals in the sample and custom outcome measures.
Further, the unitless nature of “year of learning,” like that of the “effect size” and “standard deviation” from which it is derived, makes it easier for the less informed to make gross generalizations such as “recall that an effect size of .40 equates to one year of learning.” Under a narrow set of conditions that statement might be true. However, those conditions are rarely met.
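To make that “narrow set of conditions” concrete, here is a back-of-the-envelope sketch in Python. The annual-gain figures below are hypothetical placeholders, not published norms, but the pattern is the point: the same effect size converts to very different “years of learning” depending on whose year you divide by.

```python
# Hypothetical annual gains in pooled-SD units for three grade bands.
# Illustrative placeholders only, not empirical norms.
annual_gain = {
    "early elementary": 1.00,  # younger students tend to grow fast in SD units
    "middle school": 0.40,     # the band where the .40 shorthand roughly holds
    "high school": 0.15,       # annual gains in SD units shrink by high school
}

effect_size = 0.40  # the d that supposedly "equates to one year of learning"

for grade_band, gain in annual_gain.items():
    years = effect_size / gain
    print(f"{grade_band}: d = {effect_size} -> {years:.1f} 'years of learning'")
```

Same d of .40, and it buys you anywhere from four-tenths of a year to nearly three years, depending entirely on the divisor.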
It was my understanding there would be no math
A “year of learning” is commonly computed from the difference in student performance on a pretest and posttest administered on either end of an instructional intervention.
Apologies for going technical so late in this post, but if a researcher has set up their study well and their intervention is effective (and we only hear about the effective interventions), think about the resulting distributions of student performance on the pretest and posttest.
- Prior to the intervention, before students have been taught the material, they will not be able to answer most of the questions, and the pretest will have a fairly small standard deviation.
- Immediately following the successful intervention, when students have acquired the desired knowledge and skills, they will be able to answer most of the questions correctly, and the posttest will have a fairly small standard deviation.
- Pooling these two small standard deviations and placing the result in the denominator will produce a large effect size (certainly one much larger than would be found by using an independent population standard deviation, which often doesn’t exist).
- The large effect size is then converted to multiple years of learning.
Neat how all of that works. Math is fun.
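For anyone who wants to watch the machinery turn, here is a minimal simulation of the scenario in the bullets above. Every number in it is invented for illustration (the sample size, the score distributions, the assumed population SD), but the mechanism is the one just described: range-restricted pre and post distributions shrink the pooled denominator and inflate d.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100  # hypothetical sample size

# Pretest: students can answer few items, so scores bunch near the floor.
pretest = rng.normal(loc=30, scale=8, size=n)
# Posttest: after an effective intervention, scores bunch near the ceiling.
posttest = rng.normal(loc=75, scale=8, size=n)

# Two range-restricted distributions yield a small pooled SD...
pooled_sd = np.sqrt((pretest.var(ddof=1) + posttest.var(ddof=1)) / 2)
d_pooled = (posttest.mean() - pretest.mean()) / pooled_sd

# ...compared with a broader (assumed) population SD of 15 points.
population_sd = 15
d_population = (posttest.mean() - pretest.mean()) / population_sd

print(f"effect size with pooled SD:     d = {d_pooled:.1f}")
print(f"effect size with population SD: d = {d_population:.1f}")
# Apply the .40-per-year shorthand and the pooled version converts
# into an implausible pile of "years of learning."
print(f"'years of learning' (pooled, at .40/year): {d_pooled / 0.40:.0f}")
```

Run it and the pooled-SD design hands you an effect size well north of anything an honest population-referenced d would produce, which the shorthand then happily translates into a dozen or more “years of learning.”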
Yearning
Where do we go from here? Rather than suggesting that we treat work on “year of learning” as a dead end and shut it down, I recommend that we think outside of the box and jump into it with both feet and our eyes wide open.
To get things started, I am going to suggest a name change that better reflects the elusiveness of our unending quest to better understand, quantify, and describe the learning that takes place during a school year.
Instead of the term “year of learning” and all of the George Bailey-sized baggage associated with it, I recommend that we use the term “Yearning” – a portmanteau of the words year and learning.
Plus, it contains the word “earning” which should please the econometricians whom I hold responsible for exacerbating and perpetuating, if not creating, this “year of learning” mess.
The intervention decreased yearning by 3 months – moving us that much closer to our urgent longing for equity, achievement, and justice for all.
Foolish, perhaps.
But no more foolish than what we are doing now.
Let’s do better.
