assessment, accountability, and other important stuff


Bring back valid tests


Charlie DePascale

Those of us of a certain age can recall when it was acceptable to talk about valid tests.  One could make the claim that a test was valid; or more often as a precocious graduate student, question whether a test was valid. Some version of the phrase a test is valid to the extent that it measures what it purports to measure rolled off our tongues and made its way into literally every paper we wrote, presentation we made, or late-night conversation we had about tests and testing.

But then things changed.

It was no longer acceptable to talk about valid tests.  Validity was not a property of the test.  Validity was “an integrated evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores or other modes of assessment” (Messick, 1989, p. 13, emphasis in original).

This change, of course, did not happen overnight.  It was the culmination of decades of debate about the meaning of validity and validation.  As with any transition in thinking, the field tolerated references to “valid tests” for a while, long after the focus of validation had shifted to interpretation, inferences, and actions.  At some point, however, use of the term became socially unacceptable.

Validity – New and Improved

In general, there are three primary reasons why you would change the definition of, or the way we talk about, a concept as central to a field as validity is to educational measurement and assessment.

  • There is new information or understanding that renders the existing definition obsolete.
  • There is a desire or need to clarify the existing definition so that the concept is better understood by theorists and practitioners within the field.
  • There is a desire or need to clarify the existing definition so that it results in better understanding and actions by the broader public.

With regard to validity and educational measurement, one can make a stronger case for the second reason (clarity within the field) than for the first (new information).  Beyond the appeal of a unified theory of validity within the field, however, the third reason, promoting better test use, was the driving force behind the repackaging of validity.  As Shepard (2016) clearly describes, “the reshaping of the definition of validity, beginning in the 1970s and continuing through to the 1985 and 1999 versions of the Standards, came about in the context of the civil rights movement because tests were being used to sort and segregate in harmful ways that could not be defended as valid.”

Shepard (2016) and Cronbach (1988) explain that validity and validation are concepts that must be understood and applied appropriately by the vast body of test users to their vast array of test uses.  Cronbach argues that validators of the appropriateness of tests meet their responsibility “through activities that clarify for a relevant community what a measurement means, and the limitations of each interpretation.  The key word is ‘clarify.’”

Validity – Clarified?

We are now in at least our sixth decade of clarifying the concept of validity.  Over the last 30 years, successive versions of the Joint Standards have been consistent both in the importance they assign to the concept of validity and in its definition:

Validity is the most important consideration in test evaluation. The concept refers to the appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores. (1985)

Validity refers to the degree to which evidence and theory support the interpretations of test scores entailed by the proposed uses of the tests. Validity, therefore, is the most fundamental consideration in developing and evaluating tests. (1999)

Validity refers to the degree to which evidence and theory support the interpretations of test scores for proposed uses of tests.  Validity is, therefore, the most fundamental consideration in developing and evaluating tests.  (2014)

The 1999 and 2014 Joint Standards further state,

“The process of validation involves accumulating relevant evidence to provide a sound scientific basis for proposed test score interpretations. It is the interpretations of test scores for proposed uses that are evaluated, not the test itself.”

As scientists, researchers, evaluators, or interested observers of educational measurement and test use, we must ask: has this definition of validity, largely unchanged for 30 years, clarified the concept of validity and resulted in better and more appropriate test use by policy makers and by the general public?  My answer to that question is simply to evoke all of the validity issues associated with the phrase Test-based Accountability.

So, if the current definition of validity has not been as successful as we would have liked with end users of assessment, has the focus on a unified theory of validity, validity arguments and “an integrated evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores or other modes of assessment” clarified the concepts of validity and validation within the field?

Well, not so much.  If this is what clarity looks like, then I shudder to think of how things would look with a lack of clarity.

  • Yes, there is consensus on the importance of collecting evidence to support proposed test score interpretations and uses. What constitutes evidence, how much of it is necessary, how it should be evaluated, and who is responsible for collecting, evaluating, and presenting that evidence is another story.
  • Technical concepts such as constructs, construct under-representation, and construct-irrelevant variance are crossed with commonly used words such as alignment and comparability to create tables of test evaluation criteria that have no meaning to the field or the general test user.
  • The shift away from external criteria and predictions has had the unintended consequence of increasing acceptance of the test score as the construct. There can be no give-and-take between theory and evidence from the assessment if the test score is the only evidence available or considered.
  • Although Cronbach wrote of the responsibility of validators, it is no longer clear who those validators are. If everyone is responsible for validation, nobody takes responsibility for validation.  If validation is an ongoing (i.e., unending) process, we can too easily put off some of those validation studies for another year, or two, or three until we have time to gather good data about how the test scores are being interpreted and used.
  • And even if there is consensus that validity is a unitary concept, the evidence needed to develop a validity argument is still being produced largely by specialists responsible for one distinct aspect of the testing process. Who within the field as it is currently being molded is being equipped with the knowledge and skills necessary for crafting and carrying out a comprehensive and coherent validity plan?

Back to the drawing board

As shown above, there was virtually no change in the definition of validity presented in the 1999 and 2014 Joint Standards and little change in the 30+ years since the 1985 version of the Joint Standards.  If our current definition of validity has not produced desirable outcomes, we have a professional responsibility to continue to review and refine it.

Just for fun, let’s return to our old friend:

Validity is the extent to which a test measures what it purports to measure.

What were the problems with that definition of validity?

A core criticism, of course, was that the definition did not address test use explicitly.  Outside of the thin air and ivy-covered walls (or perhaps some other green plant-covered walls) of academia, however, is that really a problem?  As the examples provided by Shepard (2016) demonstrate, with little help from the measurement community, the courts have certainly interpreted the definition of validity broadly enough to include test use since the 1970s.

Can we expect educators and policy makers to interpret “what it purports to measure” as “what you intend to measure” or “what you purport it measures”?  I would like to think that is the case, but if not, that is a relatively simple fix to the definition.

A second criticism is that the traditional definition is too narrow or too simple.  It does not reflect the complexity of the concept of validity and does not accurately reflect the context of educational testing (i.e., that the test is being administered for a particular purpose).  That is a fair concern.  On the other hand, we can all agree that the concepts of validity and validation are far too complex to be captured by any reasonable expansion of the definition.  What do we lose by trying to make a definition capture more than can possibly be captured in a simple definition?

There is also scientific elegance in the phrase “validity is the extent to which a test measures what it purports to measure.” In particular, the word purport seems uniquely suitable to describe the concept of validity or the process of validation.  There is an air of skepticism associated with the word purport.  Purport refers to claims made without proof or supporting evidence.  What better way to define validity and validation than to suggest that one must assemble evidence to prove that a test measures what it claims to measure – or what you intend to measure with it?

Would a reasonable group of experts and test users, assigned the task of describing how to prove that a test measures what it purports to measure, arrive at the same set of validity standards contained in the Joint Standards: clusters of standards related to establishing the intended uses and interpretations of the test, issues regarding samples and settings used in validation, and specific forms of evidence?

A Brave New World

Somebody somewhere is probably already working on the validity/validation chapter for the fifth edition of Educational Measurement (1951, 1971, 1989, 2006).  Unlike the Joint Standards, which by their very nature must lag a generation or two, the validity/validation chapter in Educational Measurement has the opportunity to move the field forward. I hope the author takes advantage of that opportunity.

We are on the cusp of a brave new world in education, assessment, and educational measurement.  It is a world defined by technology, personalization, and an emphasis on individual growth.  It is a world without standardization and without fixed-form, census, large-scale assessment. It is a world in which educational measurement (and its principles of validity, reliability, and fairness) is threatened by statistical modeling. (Yes, there is a difference between measurement and modeling.)

For educational measurement to have any hope of survival, we need to begin by being able to clearly convey what we mean by validity and validation and why they are important.  For my money, I will start with a test that measures what it purports to measure.

Implausible Values

Equating in the early part of the 21st century

Charlie DePascale


Our field is facing a crisis brought on by implausible values. The values which threaten us, however, are not the assessment results questioned above.  Those are only the byproduct of the values which our field and society have adopted with regard to K-12 large-scale assessment.

That is, the values which lead us to wait more than a year for the results of the 2017 NAEP Reading and Mathematics tests while expecting nearly instantaneous, real-time results from state assessments like Smarter Balanced, PARCC, and the custom assessments administered by states across the country.

NAEP, the “nation’s report card,” is an assessment program that does not report results at the individual student, school, or even district level (in most cases). NAEP results have no direct consequences for students, teachers, or school administrators.

State assessment results for individual students are sent home to parents.  School and district results are reported and analyzed by local media and may have very real consequences for teachers, administrators, and central office staff.

It is a paradox that equating for annual state assessment programs with a dozen tests and multiple forms is often carried out within a week while results for the four NAEP tests administered every other year can be delayed indefinitely with the explanation that

“Extensive research, with careful, detailed, sophisticated analyses by national experts about how the digital transition affects comparisons to previous results is ongoing and has not yet concluded.”

Of course, it is precisely because NAEP results have no real consequences or use that we are willing to wait patiently, or disinterestedly, until they are released.  Can anyone imagine a state department of education posting a Twitter poll such as this?

[Embedded Twitter poll: “NAEP 2017”]

The reality, however, is that the NAEP model is much closer to the time and care that should be devoted to equating (or linking) large-scale state assessment results across forms and years than current best practices with state assessments.

Everything you wanted to know about equating but were afraid to ask

To a large extent, equating is still one of the black boxes of large-scale assessment.  It is that thing that the psycho-magicians do so that we can claim with confidence that results are comparable from one year to the next – not to mention valid, reliable, and fair.

Well, let’s take a quick peek inside the black box.

There are two distinct parts to equating – the technical part and the conceptual/theoretical part.

In reality, the technical part is pretty straightforward; at most it is a DOK Level 2 task.  There are pre-determined procedures to follow, most of which can be automated. It’s so simple that even a music major from a liberal arts college can pick it up pretty quickly (self-reference). That’s what makes it possible to “equate” dozens of test forms in a week; or made it possible for a former state department psychometrician to boast that he conducted 500 equatings per year.
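To make the "DOK Level 2" point concrete, here is a minimal, hypothetical sketch (not any testing program's actual operational procedure) of one classic common-item linear equating method, often called mean-sigma equating: scores on a new form are placed on the old form's scale using nothing more than the means and standard deviations of a set of shared anchor items.

```python
import statistics

def mean_sigma_equate(new_scores, anchor_new, anchor_old):
    """Place new-form scores on the old-form scale with a linear
    (mean-sigma) transformation estimated from common anchor items.

    anchor_new / anchor_old: statistics on the same anchor items as
    observed with the new and old forms, respectively.
    """
    slope = statistics.pstdev(anchor_old) / statistics.pstdev(anchor_new)
    intercept = statistics.mean(anchor_old) - slope * statistics.mean(anchor_new)
    return [slope * x + intercept for x in new_scores]

# Hypothetical data: the anchor items spread twice as widely on the old scale.
old_anchor = [2, 4, 6, 8, 10]
new_anchor = [1, 2, 3, 4, 5]
print(mean_sigma_equate([3, 4], new_anchor, old_anchor))  # [6.0, 8.0]
```

The arithmetic really is this mechanical, which is the point: everything hard is hidden in the assumption that students interact with those anchor items the same way across forms, devices, occasions, and conditions — the conceptual questions taken up below.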

Unfortunately, the technical part leaves you few options when the results just don’t make any sense.

That brings us to the conceptual and theoretical part of equating, which involves few, if any, complicated equations, but is the much more complex part of equating.

As a starting point, it’s important that we don’t confuse the concepts and theory behind the technical aspects of equating with the theoretical part of equating.  That’s a rookie mistake or a veteran diversion.

The concepts and theories that should concern us are those related to how students will perform on two different test forms or sets of items; or on the same test form taken on two different devices; or on a test form administered with accommodations; or on a test form translated into another language or adapted into plain English; or on test forms administered under different testing conditions; or on test forms administered at different times of the year or at different points in the instructional cycle.  The list goes on and on.

Unfortunately, we know a lot less about each and every one of those conditions than we do about the technical aspects of equating.

In the past, our go-to solution was to try to develop test forms that required as little equating as possible. That approach, sadly, is no longer viable.  We have now moved beyond equating test forms to applying procedures to add new items to a large item pool; that is, to place the items on a “common scale” with the other items in the pool.

It was also tremendously helpful that in the past we didn’t really expect any change in performance at the state level from one year to the next.  That is, we had a known target, or a fixed point, against which to compare the results of our equating.  If the state average moved more than a slight wobble, we went back to find the problem in our analyses.  It was a simpler time.

Where we go from here

We cannot return to that simpler time, but neither can we abandon some of its basic principles.

When developing new technology, as we are doing now with large-scale and personalized assessment, it is important to have a known target against which to evaluate our results.  When MIT professor Harold ‘Doc’ Edgerton was testing underwater sonar systems, he is quoted as saying that one of the advantages of testing the systems in Boston Harbor was that the tunnels submerged in the harbor didn’t move.  He knew where they were, and they were always in the same place.

We need the education equivalent of a harbor tunnel against which to evaluate our beliefs, theories, procedures, and results.  We are now in a situation where the amount of change in student performance that has occurred from one year to the next is determined solely by equating.  There is no way to verify (or falsify) equating results outside of the system.  That is not a good position from which to operate a high-stakes assessment program; particularly at a time when so many key components of the system are in transition.

Finding such a fixed target is not impossible, but it is not something that can be done on the fly. We cannot continue to move from operational test to operational test.

Our current model of test development for state assessment programs rarely includes any opportunity for pilot testing.  That has to change.

We need to rely less on the technical aspects of equating and invest more in understanding the concept of equating.

We need a better understanding of student learning, student performance, and how student performance changes over time before we build our assessments and equating models.

We need to be humble and acknowledge our limitations.  A certain degree of uncertainty is not a bad thing, if its presence is understood.

Finally, we need to move beyond the point where whenever I think about equating, this scene from Apollo 13 immediately comes to mind.

My 12 Memories of Christmas

Charlie DePascale


As time goes by, certain memories of Christmases past become stronger than others.  Most are filled with family, food, music, and fun; but a few other things manage to creep in as well.  On Christmas 2017, here is a list of my 12 memories that mean Christmas to me.

  1. The First Noel 

My sister, 2 years younger than me, was at a stage where she recognized letters but could not yet read.  One Sunday afternoon before Christmas, my father was at the sink washing dishes and my sister was in the pantry looking at the box of Christmas cookies with the word NOEL written across the package.  She asked my father, “What does N-O-E-L spell?”  She heard his response, “No L,” and replied, “Yes, there is an L!”  After 5 minutes of my father trying to explain, my sister becoming increasingly frustrated, and me laughing hysterically, I knew that Christmas was always going to be one of my favorite holidays.

  2. The Meaning of Christmas

Beginning with Rudolph (1964) and Charlie Brown (1965), followed by the Grinch (1966) and Santa Claus is Coming to Town (1970), I learned the “specials” meaning of Christmas:

There’s always tomorrow for dreams to come true, believe in your dreams come what may

Maybe Christmas, he thought… doesn’t come from a store.  Maybe Christmas, perhaps… means a little bit more!

Christmas Day is in our grasp! So long as we have hands to clasp!

You put one foot in front of the other, And soon you’ll be walking ‘cross the floor. You put one foot in front of the other, And soon you’ll be walking out the door





And just in time in 1974, along came The Year Without A Santa Claus with the Heat Miser, Cold Miser and the advice to “just believe in Santa Claus like you believe in love.”


  3. Smile and say Santa

It wouldn’t be Christmas, of course, without the photo card of me and my sister (and later the dog).  In the days before Shutterfly, digital cameras, and even 1-hour photo, that meant family “photo shoots” beginning in late summer or early fall to ensure a good picture.  Most years that involved bringing a few Christmas decorations up from the basement and dressing us in nice clothes.  One year, my Mom decided that an outside shot would be nice.  So, in August we were hanging tinsel and decorations on the evergreen tree in my aunt’s front yard and donning our winter coats.




  4. 1968

In 1967, I was aware of the space program and the Red Sox.  In 1968, the rest of the outside world came crashing through the door.  From the Pueblo to the Tet Offensive to Martin Luther King to Bobby Kennedy to the Democratic Convention to the protest at the Olympics to the election of Richard Nixon everything seemed to be spinning out of control.  And then came Apollo 8 and its Christmas Eve broadcast while orbiting the moon.


  5. Grand Funk Railroad – We’re an American Band

Throughout my childhood the annual Smyth family Christmas party brought together three generations of my mother’s side of the family. Grandparents, aunts and uncles, and most of our gaggle of 17 cousins gathered for an afternoon of food, games with Christmas-themed trinkets as prizes, Christmas music, etc. In the mid-1970s one of my older teenage cousins decided to replace the Christmas album on the record player with a pretty yellow album and music I had never heard before.  A few minutes later, parents entered from the kitchen, the album was flying across the room toward the wall, and our annual Christmas parties were no more.  I still use We’re an American Band as the song on my alarm clock.



  6. Here We Come a Caroling

From 1971 through 1977, I attended Boston Latin School – the oldest public school in the country and a place steeped in tradition.  One of the more recent and beloved informal traditions was members of the senior class singing Christmas carols on the balcony overhanging the cafeteria during lunches on the day before Christmas vacation.  Nothing, not even traditions, lasts forever. In 1972, the school became coed.  Sometime in that period, they took away the one minute of silent meditation and reflection at the beginning of the day.  In my senior year, they told us Christmas carols were no longer allowed in school, and administrators and faculty did everything they could to thwart their singing.  Growing up in Boston, I had not realized that Christmas was controversial.

  7. Our 2nd Christmas Together

My wife, Lisa, and I were married in September 1984, and our first two years of married life were spent in Minnesota while I completed coursework for my PhD program.  This meant leaving our apartment in Minneapolis in mid-December to return “home” to Boston for Christmas. The first year, we decided not to decorate the apartment because we would not be there for Christmas.  A bad idea. The second year, around Thanksgiving we headed to the Goodwill store up the street, picked up a silver aluminum tree, and all of the boxes of old ornaments that we could get for $25 (and could fit into our Sentra).  We had our first Christmas tree and have had one every year since then (although never silver again).  Upon returning to Minneapolis after Christmas break, we donated the tree and ornaments back to Goodwill.

  8. The Brown Box

Lisa is one of those lovely people with the tremendously annoying ability to pick up a wrapped present and know immediately what it is.  One year, my parents were determined to stymie her.  They bought her a sewing box from Italy made of beautiful brown wood.  It looked like they had her.  As she removed the wrapping paper and stared at the brown cardboard box with no clue what it contained, she reported what she saw – a brown box.  My Dad, mistakenly thinking that she had guessed it was the “brown sewing box,” was beside himself, gave away the surprise, hilarity ensued, and an annual Christmas story was born.

  9. Christmas Eve (part 1)

Everywhere else, December 24 was Christmas Eve.  For our family, however, it was first and foremost my grandfather’s birthday.  That meant a wonderful Italian dinner followed by an evening with his entire family (see #5 above) filling the two-family home we shared with them.  After lots of food, laughter, and probably just a little drinking, the evening invariably ended in our living room with my sister at the piano and my uncles leading the singing of Christmas carols and old standards from the 1930s, 40s, and 50s.

  10. Christmas Eve (part 2)

After my grandfather’s passing, Christmas Eve became a night to celebrate with my wife’s family.  That became a bit more complicated when her sister married a “nice Jewish boy from Long Island” and they were raising their children in that faith.  In the spirit of family peace and harmony, the compromise was “Chanukah presents”, wrapped in a limited variety of blue Chanukah paper (buy as many rolls as you can when you see it), and placed carefully under the Christmas tree.  Nothing confusing there for a child, right?  One year, when my niece was around 4 years old, I asked her what all the presents were for.  She looked up and replied matter-of-factly, “After dinner.”

  11. Dashing Through The Snow

In 2002, our Christmas Day visit to my parents’ house was threatened by an impending snowstorm.  The storm was expected to start by late morning and dump about a foot of snow on us in Southern Maine.  Christmas was canceled that year.  Not so fast: in the spirit of Rudolph, we loaded up the sleigh (car with presents) right after breakfast and started out on the 90-minute drive to my parents’.  We called my parents from the car as we were approaching their house (during which call, my Mom told us to hold on, someone was pulling into their driveway).  They got to give Christmas presents to their granddaughter in a whirlwind visit (which was not quite as much of a whirlwind as it should have been), and we made it almost all the way back home to Maine before the roads were covered with snow.

  12. Our child is born



In December 1993, we were preparing for the birth of our first child – who was due in late January.  Mary had other plans and was born on December 15.  Starting out at just a bit under 4 pounds, she spent the first ten days of her life in the hospital in an isolette (or baby incubator).  When we arrived at the hospital on Christmas morning, however, she was out of the isolette for the first time and dressed in a bright green Christmas onesie.


We spent that day holding our Christmas miracle and starting a whole new set of Christmas memories with her.  To my staples of Rudolph, Charlie Brown, etc. were added Elmo Saves Christmas, Arthur’s Perfect Christmas, Elf, and The Polar Express.

So, as the song says ….


Have yourself a merry little Christmas
Let your heart be light
From now on, our troubles will be out of sight
Have yourself a merry little Christmas
Make the Yuletide gay
From now on, our troubles will be miles away
Here we are as in olden days
Happy golden days of yore
Faithful friends who are dear to us
Gather near to us once more
Through the years we all will be together
If the fates allow
So hang a shining star upon the highest bough
And have yourself a merry little Christmas now

The Physics of Psychometrics

Charlie DePascale

A metaphysical tale of the past, present, and future relationship between psychometrics and educational assessment.


A cautionary tale


Charlie DePascale

Earlier this month I traveled to Lawrence, Kansas, to attend the NCME special conference on the confluence of classroom assessment and large-scale psychometrics. In a panel discussion titled “I’ve a feeling we’re not in Kansas anymore,” Kristen Huff, Karen Barton, Paul Nichols, and I shared the perspective that when bringing together classroom assessment and large-scale psychometrics, co-existence might be a better goal than confluence.  Our goal was to engage in a critical discussion about those differences between the purposes and uses of classroom and large-scale assessment that give each unique measurement and psychometric needs. To accompany that discussion, we offered the following look over the rainbow at what might result if we blindly attempt to apply large-scale psychometrics in the classroom.


Gillikin County Schools Embrace Classroom Psychometrics

Emerald City –  (September 13, 2017)

As a new school year opens, teachers and administrators in Gillikin County, just north of Emerald City, are clicking their heels with excitement over the impact of their use of a new process called Classroom Psychometrics. Some call it science, others magic, but all agree that it has changed the way that they look at teaching and at their students.

Superintendent Glinda Marvel describes how the district made the leap to classroom psychometrics:

Our teachers have been hooked on data since we introduced TestWiz™ back at the turn of the century.  Over time, however, they found that the data wizard just did not provide them with enough information – did not let them see what was going on behind the curtain.  Then last year, this new computerized adaptive test (CAT) system – TOTO – was dropped in our laps.  Now that’s a horse of a different color.

At first, all of the thetas and sigmas were just Greek to me. And many of our teachers were fearful.  They didn’t understand how test questions that discriminate and just getting rid of the misfits could be a good thing.  But then we saw that with TOTO none of our little munchkins ever have to get a question wrong.  The little lights on their individual dashboards are always green.

S. C. Strawman, doctor of thinkology and president of Emerald Associates, explains that the beauty of classroom psychometrics is that it untangles the messy webs one often encounters with learning progressions and popular learning map models – “once teachers realize that the manifest monotonicity of classroom psychometrics is their students’ manifest destiny not even a horde of flying monkeys can pull them off course.”

Or as Castle High School teacher, Almira Gulch put it, “understanding that everything can be explained by just one dimension made our lives so much easier.”  And principal Frank Marvel added, “accepting that there will be some kids who just don’t fit the model no matter what we do, took so much pressure off of our teachers.”

Of course, not everyone is sold on TOTO and classroom psychometrics. Dorothy Gale, longtime school board member, summed it up this way:

Well…I think that it’s just that … Sometimes we go looking over the rainbow for the answer when it was really right there in our back yard all along.  Classroom Psychometrics really isn’t telling us anything new about our classrooms and students.  We just needed a little help seeing what was already there right in front of our noses.

Remember the Alamo


[Photo: The Alamo]


Charlie DePascale

This spring I returned to San Antonio to attend the 2017 NCME conference.  The trip brought back memories of my many visits to the Harcourt office there as a member of the MCAS management team for the Massachusetts Department of Education. My last MCAS trip was in August 2002.  Some things in San Antonio were exactly as I remembered them from fifteen years ago, others had changed a little, and some are now merely memories.

[Photo: HEM badge]

Harcourt, of course, is gone.  Schilo’s still has the best root beer I’ve ever tasted.  The Alamo is still nestled in the midst of office buildings, souvenir shops, and tourist “museums”.  I still would have found the convention center with the street map I used in 2002 (no smart phone back then).  However, unlike the Alamo, the convention center had moved just a bit from where I remembered it being in 2002.

Like Harcourt, the original MCAS program is gone, finally replaced this spring by the next generation MCAS tests. But what about assessment and accountability in general?  How have they changed since the summer of 2002?  What is the same, what has changed a little, and what is now a memory?

Still crazy after all these years

Like the Alamo, much of the facade of assessment and accountability remains unchanged.

  • State content standards, achievement standards, annual testing, and accountability ratings are still the foundation of our work.  
  • Individual state assessments and achievement standards are still the norm despite the best efforts of a lot of earnest folks.
  • NAEP and its trends are still the gold standard. The nation’s report card appeared to be teetering on the brink of irrelevancy back in 2009 with the advent of the Common Core and state assessment consortia.  Like the cockroach and Twinkies, however, it appears that NAEP is impervious to any kind of attack.

What’s new

NCLB has come and gone, but its effects on state assessment and accountability remain. Technology has also had an impact on how we test, and to a lesser extent for the time being, what we test.

  • Annual testing of all students at grades 3 through 8 remains in place – the primary legacy of NCLB.  

We have not yet come up with a good reason for testing all students in English language arts and mathematics every year, but there is surprisingly strong support for doing so. No, telling parents the truth and being able to compute growth scores are not good reasons for administering state assessments to every student every year.

  • Growth as a complement to status and improvement – I place growth scores as a close second to the annual testing of all students when considering the legacy of NCLB; but second and not first only because annual testing was a sine qua non for growth scores.  

Despite all of the value they add to the interpretation of student and school performance, I remain skeptical about our use of growth scores. Much like the original ‘1% cap’ on alternate assessments, growth scores emerged as a means to include more students in the numerator when computing the “percent proficient” for NCLB.  Yes, there are sound reasons for giving schools credit for growth, but growth scores would have died a quick death if they could not have been used to help schools meet their AMO.

I am worried about what I have referred to as the slippery slope of growth; that is, the use of growth scores as the 21st century approach to accepting different expectations for different groups of students.

I am worried about the use of growth scores as another example of our predilection for treating the symptom rather than the disease.  People are not fit – take a pill to lose weight.  College debt is too high – improve student loans.  Students have no reasonable chance to master grade level standards within a school year – compute growth scores.

  • Online testing (née computer-based testing) replacing paper-and-pencil testing.  For years, it appeared that online testing was the proverbial carrot being dangled in front of the horse – always just a bit out of our reach.  Now, however, it appears that we have finally reached a tipping point.  Although there are still glitches, in states across the country the infrastructure is in place to support large-scale online testing.

Online testing itself, of course, is only one way in which technology has impacted large-scale assessment.  Figuring out the best way to make use of things like automated scoring, student information systems, adaptive testing, dynamic reporting, and the wealth of process data available from an online testing session should keep the next generation of testing and measurement specialists quite busy for years to come.

  • Communication across states – There is now constant communication among states, and that communication occurs at multiple levels (commissioners/deputy commissioners, assessment directors, content specialists, etc.).  Some might argue that there is often more and better communication within levels across states than across levels within a state, but let's save that discussion for another day.

The increased communication across states can be attributed to the common requirements and pain of NCLB, technology, the assessment consortia, or all of the above.  Cross-state communication, however, deserves its own place in any list of things that are new since 2002.   The bottom line is that although it may not have resulted in common assessments to the extent expected, increased communication across states has changed how we think about and how we do state assessment and accountability.

Gone, but not forgotten (I hope)

Nothing lasts forever (except NAEP trend lines, see above), but there are a few things that faded away or simply disappeared over the last fifteen years that surprised me.

  • District accountability – It feels as if there is a lot less focus on district accountability as something distinct from school accountability now than there was in 2002, even shortly after NCLB was first enacted.

District report cards are often simply aggregated school report cards, with districts evaluated on the same metrics and indicators as schools.  Although technology and laws/regulations have made it much easier for states to interact directly with schools, there is something critical in the hierarchy among states, districts, and schools that must be maintained.  Perhaps as the accountability pendulum swings slightly back from an exclusive focus on outcomes (i.e., test scores), the impact of district inputs, processes, and procedures on student performance will receive greater attention.

  • Standardization – So critical to large-scale assessment that it was actually part of the name (i.e., "standardized testing"), standardization is dead, and it will not be returning any time soon.  To some extent, standardization, in general, was a casualty of the backlash against traditional "fill in the bubble" tests and test-based accountability, but that is only part of the story.

We (i.e., the assessment and measurement field) gave up standardization in administration conditions in stages.  Time limits were abandoned in the name of proficiency, “standards-based” education, and allowing kids to show what they could do  (don’t ask me to explain how that made sense).  Standardized administration windows were abandoned to accommodate the use of technology.  Concerns about validity and the appropriateness of accommodations for students with disabilities and English language learners gave way to “accommodations” for all in the name of equity and fairness.

We were never truly invested in standardization of content for individual students, so we gave that up willingly as it allowed us to play with measurement toys like adaptive testing, matrix sampling, and “extreme equating” that is worthy of the name.

As for standardization of scoring, well, if we limit the concept of scoring to mean the scoring of an item, then as soon as we moved away from machine-scored, multiple-choice items, standardization of scoring took a hit.  The larger concept of scoring, however, is a bit too complicated to address adequately near the end of a post such as this.  For now, let's just end our discussion of standardization of scoring with the question that has been posed in countless stories and songs: can you lose something that you never really had?

  • Time – I believe that the single most important change in large-scale assessment since 2002 may be the loss of time available to design, develop, and implement an assessment program.  

The RFP for the original MCAS program was issued in 1994; after some delay the contract was awarded in 1995; and the first operational MCAS tests were administered in spring 1998.  As new tests were added to the MCAS program, it was the norm for a test to be introduced via a full-scale pilot administration one year prior to the first operational administration.  

In contrast, the RFP for the next generation MCAS tests was issued in March 2016; the contract was awarded in August 2016; and the first operational MCAS tests were administered in spring 2017.

And Massachusetts is not alone.  In states across the country, it is becoming normal practice to go from RFP to operational test in less than a year.  This would not be as much of a problem if states were simply purchasing ready-made, commercial tests that had been carefully constructed and validated for use in the state’s particular context.  For most state assessments, however, that is not the case.

Four years from initial design to implementation may or may not be too long, but there is no question that 10 months is too short.  And I don’t want to hear “the perfect is the enemy of the good” or “building the plane while we are flying it” as arguments for minimizing the test development cycle.  First, those concepts don’t really apply to things like planes (and high-stakes assessments).  Second,  they do not apply in one-sided relationships in which mistakes are met with fines, lawsuits, and loss of contracts. Third, what’s the rush?

Where do we go from here?

I identified the loss of time as the most significant change not so much because of its negative impact on the quality of state assessments, but rather because of what it signifies.  The requirement to design, develop, and implement custom assessments in less than a year is a clear indication that the measurement and assessment community may have lost what little control or influence we had over assessment policy.  Recent trends in the area of comparability are another example.  We are more likely to be asked after the fact "how to make things comparable" or to "show that they are comparable" than to be asked for input in advance on "whether an action will produce results that are comparable."  To paraphrase an old expression, policymakers find it much easier to ask the measurement community for a justification or a rationalization than for permission.

In part, this may be a reflection of policymakers facing many constraints and limited options. In part, however, I believe that we have ourselves to blame; we are a victim of our own success, or at least of our own public relations machine.  Psychometricians can do anything with test scores!  Unfortunately, the field cannot survive as a scientific discipline by simply saying yes to every request that is made, regardless of how unrealistic or unreasonable that request may be.  (Note that I am making a distinction between testing companies surviving and the assessment/measurement field surviving.)

The fate of the field, however, may have already been sealed by the emergence of Big Data. Only time will tell.  

I will meet you at Schilo’s in 2032 and we can discuss it over a frosty mug of root beer. The first round is on me.




If it’s Tuesday, this must be …

Charlie DePascale



12 days, 3 conferences: PowerPoint presentations, posters, uncomfortable chairs, and a few random thoughts.

  1. Conference presentations are an art form: whether it's a keynote address, a 15-minute research presentation, an "electronic board", or a poster, a good presentation must tell a story, make a point, and deliver a message. A picture can be worth thousands of words.  Thousands of words on a slide are not worth quite as much.

  2. Long breaks between sessions are nice. Finding the right break:session ratio can make or break a conference.

  3. With the right speakers and topics, two days of plenary sessions can be magical.

  4. It can be good to put yourself in situations where you're the oldest person in the room for two days (but, you know, not in a creepy way).

  5. I hate effect size. It may have been an improvement over reporting simple significance levels, but that's not enough.  You still have to be able to explain the impact that your treatment will have.  Even the dreaded econometricians can tell me that a certain treatment will result in a student earning $5 a week more for the next 40 years.  I don't believe any of it, but I appreciate the effort.

  6. Every conference should have a few sessions where we share all of the things that didn't work as planned and describe all of the treatments that had absolutely no impact at all. We can learn so much from things that didn't work.

  7. I miss overhead transparencies. Sure, they were messy and limited, but they seemed to breed spontaneity.  Pretty much every session in the transparency era included somebody grabbing a blank transparency and a marker to rebut or support a presenter. That just doesn't happen with PowerPoint.

  8. How about a conference with "anything can happen Thursday"? Imagine a Thursday afternoon with a block of concurrent sessions, but no descriptions.  You don't know what the topic is or who the presenters will be.  You just pick a room, sit back, and engage with the presentations.  The downside is minimal, and you may learn something new.

  9. Researchers have to ask big, important questions; or at least they have to know and understand the big, important questions. Even if your project is to make a better widget, you should have some idea how the better widget might be used.

  10. As Seen on TV: Nothing ever works the same way at home as it did on TV.  That's the feeling I get with many presentations.  This treatment will work with high-quality instruction, appropriate administrative support, sufficient training, proper implementation, and kids who want to learn.  And the cold remedy will work if taken for 10 days with lots of fluids and plenty of rest.
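To make the effect-size gripe concrete: a standardized effect size like Cohen's d reduces a treatment effect to fractions of a standard deviation, which is exactly why it still needs translation into real-world units. A minimal sketch (all numbers invented for illustration):

```python
import math

def cohens_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardized mean difference, using the pooled standard deviation."""
    pooled_var = (((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
                  / (n_treat + n_ctrl - 2))
    return (mean_treat - mean_ctrl) / math.sqrt(pooled_var)

# Hypothetical study: a 4-point gain on a test with a standard deviation near 20
d = cohens_d(mean_treat=254, mean_ctrl=250,
             sd_treat=20, sd_ctrl=20, n_treat=100, n_ctrl=100)
print(round(d, 2))  # 0.2 -- "small" by convention
```

A d of 0.2 tells you the groups differ by a fifth of a standard deviation; it does not tell a teacher, parent, or policymaker what that buys a student, which is the complaint above.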


And why is it so hard to get a comfortable temperature in hotel conference rooms?