It’s Time To Remember and To Repeat

It’s one of those moments in life that I will never forget. September 1971, the start of seventh grade and my six years at Boston Latin School. Herded into the auditorium for our first assembly. Even before we were allowed to sit and well before we were told to look to our left and to our right, I found myself staring in awe at the larger-than-life names of larger-than-life people inscribed on the “upper frieze” of the auditorium: FRANKLIN, HANCOCK, ADAMS, MATHER, BOWDOIN, EVERETT, BULFINCH, BERNSTEIN. Names synonymous with Boston and US history. Names of my neighborhood, my elementary school, the streets and squares where I grew up. And as my eyes followed the list around the hall: SANTAYANA. Santa- who?

It was a true “one of these things is not like the others” moment. Well, to be honest, two of these names were not like the others, but even at 12, I knew who Leonard Bernstein was. But who was Santayana? Why was his name up there?  I had to look him up.

In 1971, looking him up meant my first stop was our brand new set of Compton’s Encyclopedia. Eventually, I learned that the name belonged to a Spanish-born and Boston-educated philosopher, and I learned the oft-quoted line from his book The Life of Reason:

Those who cannot remember the past are condemned to repeat it.

A decade later in 1981, I encountered that same quote on another “frieze” of sorts as I began my teaching career. It was taped high on the wall at the top of the staircase leading to the history department.

But the past that I want us to remember (and repeat) today is from a decade later: the early 1990s and the early years of my career in large-scale state testing.

It’s difficult for me to grasp that the newly minted PhD and budding psychometrician encountering large-scale state testing was so much closer in age (among other things) to that awestruck seventh grader than to the person scribbling down this post while waiting in a medical center parking garage. But it’s even more difficult for me to wrap my head around the fact that in nearly 40 years, the field has done so little to advance what we thought in 1990 was the only viable solution to large-scale state testing: state-supported, school-based assessment.

Did we fail spectacularly in the early 1990s? Sure, but spectacularly is the only way to fail. Go big or go home.

And, as I argue in my new paper, Overcoming Barriers to School-based Large-scale Assessment with the Support of Artificial Intelligence Tools, prepared for the 8th International Association for Innovations in Educational Assessment Conference, the time is now to remember, revisit, and repeat our commitment to school-based assessment. However, heeding the words of Santayana, with the help of artificial intelligence tools we should be able to avoid the missteps that doomed our efforts in the 1990s.

In the remainder of this post, I summarize the main points argued in the paper.

State-supported, school-based assessment

As the 1990s began, the field of large-scale state testing was poised to answer the call

  • to transition from its longstanding reliance on commercial, norm-referenced tests to custom-developed, criterion-referenced tests,
  • to transition from an almost exclusive reliance on multiple-choice items to more authentic forms of assessment,
  • to transition from an emphasis on items requiring recall, rote memorization, and other basic skills to the assessment of higher-order knowledge and skills, and
  • to transition from prioritizing assessment of learning to embracing the concept of assessment for learning.

It was clear at the time that doing so would also require a transition from external, on-demand tests administered at the end of the year to school-based assessment aligned to the curriculum and embedded within instruction. For lack of a better descriptor: state-supported, school-based assessment.

What went wrong?

The measurement challenges to our efforts to implement school-based assessment in the 1990s are well-documented and well-known to most readers of this blog – reliability of scoring, generalizability, and comparability, to name a few.

However, I think that there were three distinct categories of barriers that we failed to overcome:

  • Measurement/assessment challenges
  • Logistical challenges – Technical Infrastructure
  • Interpersonal challenges – Human Infrastructure

And when we examine each of these, it is relatively easy for me to conclude that measurement challenges, in fact, were the least of our concerns. Logistical challenges were a formidable barrier, but the technological advances since the 1990s, including those that facilitated the creation of student identifiers and student information management systems, and the transition to computer-based testing have helped strengthen the technical infrastructure. What remains as a barrier, however, is the human infrastructure.

Human Infrastructure and School-Based Assessment

Human Infrastructure is the network of people, relationships, knowledge flows, and norms that enable work to actually happen inside an organization. It’s the living connective tissue between strategy and execution and embodies the capacity to align, collaborate, and adapt when technology changes. – Digital Wisdom Collective, 2025

I think that there are four critical stages that must be addressed to strengthen the human infrastructure so that state-supported, school-based assessment can be implemented successfully.

  • Stage 1: Gaining trust and buy-in from all participants and stakeholders
  • Stage 2: Allowing adequate time for acclimation and use
  • Stage 3: Providing continuous support for implementation and use
  • Stage 4: Providing support for the interpretation and use of assessment results

To date, we have fallen short in all four stages when attempting to innovate in assessment.

The first two stages still rely heavily on human decision-making, policy, and practices. Artificial intelligence tools, however, have the potential to provide critical support that exceeds our human capacity in the final two stages. That said, AI can be a useful tool, or partner, in all four stages. Specifically, I envision AI tools contributing to:

  1. U/X Design
  2. Continuous real-time support for educators
  3. Scoring and providing exemplars
  4. Modeling

Modeling, whether based on traditional statistical techniques, data science, or AI, will be focused on helping to answer the one question asked by teachers that those of us in assessment have never been able to answer satisfactorily: What do I do next?

Modeling for Meaning

More than simply incorporating AI into the assessment process, however, we have to acknowledge as a field that it is a critical aim of the assessment process to provide information that helps “bridge the gap” between the sense, provided by a test score, of where a student is now and the target for where we hope the student will be at the end of a unit, semester, or school year. And having made that acknowledgement, we have to commit ourselves and our resources to generating and disseminating that information.

Our instinct will be to find ways to use AI to increase efficiency at all phases throughout the assessment process; and I’m sure that AI will be quite useful in improving efficiency. We must not forget, however, that efficiency, although desirable, is not our primary aim when we design and implement assessment programs. We must also remember that, far too often, prioritizing efficiency in assessment has led away from our primary aim. We must remember that aspect of our past, for as Santayana also noted:

Fanaticism consists in redoubling your effort when you have forgotten your aim.

Image by Gerd Altmann from Pixabay


Published by Charlie DePascale

Charlie DePascale is an educational consultant specializing in the area of large-scale educational assessment. When absolutely necessary, he is a psychometrician. The ideas expressed in these posts are his (at least at the time they were written), and are not intended to reflect the views of any organizations with which he is affiliated personally or professionally.