As I read the tweets and posts from the NCME community assembled in Chicago this week, I find myself back here in Maine reminiscing about my five conference trips to Chicago since 1991. There are the memories of discovering deep dish pizza, walking in fascination along the wall of stones (fragments of history) at the Tribune Tower, and climbing flights of stairs to get from a street at one level to the street above with the hotel I was trying to reach.

Those memories include watching my sister give a “hilarious performance” and offer a “standout solo” described as “alone worth the price of a ticket” in her role as a “gum-chewing goofball” in one of her first forays into musical comedy.

I relish the coffee, lunch, and dinner meetings during which my graduate school professors became mentors, colleagues, and in some cases, friends.

I fondly recall navigating the ‘L’ to Wrigley Field to take in a Cubs game on a beautiful 70-degree spring afternoon. Somewhat less fondly, I recall fighting the wind while trying to make it up Michigan Avenue in the midst of a blizzard, with signs warning “DANGER: Watch for Falling Ice” dotting the way. And yes, those occurred on consecutive days.

I have more conference-specific memories, too, of things related to educational measurement and assessment, of course. I’m sure that I do.

For one, this blog, Embrace the Absurd, was born to disseminate my presentation at an invited symposium at the 2015 NCME conference.

At some point, that reminiscing shifted to reflecting on the role that NCME played in my life and career over the past three decades.

Eventually, that reflecting broadened into pondering the larger question of the role that NCME has played, does play, and will play in improving educational assessment to better support teaching and student learning.

You can’t spell NCME without ME. Awesome!

Or can you?

I would love to be able to recount the story of my excitement over my first encounter with NCME and the conference, but that’s one memory I don’t have. In the 1980s, to a graduate student in educational psychology (psychological foundations of education: measurement and evaluation), NCME was simply that thing you signed up for while registering for AERA. For just a few dollars more, there were some additional sessions you could attend. (Because there weren’t already enough sessions jammed into a day at AERA?)

The fact that one of our professors, Jack Merwin, was a past president of NCME didn’t really make an impact. What we were focused on at the time was his invited address to Division D at the 1983 AERA conference: “Evaluation. A Profession?”

What is the difference between AERA Division D and NCME?

What’s the value-added, as they say, of NCME to someone who self-identifies as a member of AERA Divisions H and K as well as Division D? Or perhaps even to those who call Division B or L home?

NCME never really became one of my go-to conferences. You can only have so many.

As a data analyst and assessment professional specializing in large-scale testing, my two conferences of choice for the bulk of my career were the CCSSO Large-Scale Assessment Conference (which later became the National Conference on Student Assessment) and the annual conference of the Northeast SAS Users Group. Those were the places I would go where everybody knew my name.

LSAC/NCSA was a convening of state staff and assessment contractors all working on solving the same challenging problems related to state testing and accountability. That conference is where we introduced the standard setting method that would become the Body of Work Method, debated topics such as the meaning and merits of alignment requirements, whether the flaws in standard setting were, in fact, fundamental, the use and validation of accommodations, and all of the many issues related to the administration and scoring of constructed-response items and essays.

NESUG was a gathering of people from a wide variety of professions working on a wide variety of problems. What we had in common was that we were all using SAS to effectively organize, analyze, and manipulate data so that we could effectively present and communicate information. I never left NESUG without picking up a few tips and tools for doing my job more efficiently.

Attending NCME, on the other hand, was like a trip to Disneyland. You began in Adventureland with a full-day, hands-on, interactive workshop exploring some new technical procedure or tackling a weighty measurement policy question. Then it was on to Frontierland to watch the re-enactment of shootouts over the meaning and measurement of validity. It was the same show every year, but those actors remained oh so committed to their roles. After lunch, a visit to Fantasyland and presentations of studies showing that if you controlled for every factor that might possibly have any influence at all on achievement, there were no achievement gaps. And finally, you could wrap up the visit with a stop in Tomorrowland to see the innovative methods, multidimensional models, and new software that were being perfected at universities and would soon be integrated into operational testing programs. (Maybe believing that integration would occur was the real Fantasyland.)

I often left NCME with visions of astounding and amazing things dancing in my head, thinking that this may just be the most magical place on earth. But deep down I knew that I shouldn’t expect to see any of it when I returned home, not anytime soon anyway.

Trying to Find a Place in This World

While I was trying to place NCME within my world, it seems that NCME was also struggling to find its own place in the world.

It is never easy walking the tightrope between basic and applied research, trying to satisfy both the LSAC/NCSA applied assessment people like me and the more measurement-focused Psychometric Society folks. But it seems that NCME’s task became even more challenging over the past two decades. As stated on the homepage of the NCME website:

The National Council on Measurement in Education (NCME) is a professional organization for individuals involved in assessment, evaluation, testing, and other aspects of educational measurement. During the past 20 years, the NCME membership has become more diverse, broadening the scope of the organization’s vision. Service to communities and ensuring that assessment is fair and equitable for all students have become essential elements of NCME’s mission and purposes.

The statement starts out by broadly defining the scope of the organization, and each subsequent sentence broadens it further.

The turmoil of the last 5-8 years within and outside of the field has only added to the challenge of a field and an organization struggling to define itself.

In 2015, Mark Wilson reminded us that much of educational measurement is concerned with what takes place in the classroom. That reminder led to a conference series and an article asking the question, “Classroom assessment and large-scale psychometrics: Shall the ‘twain meet?” As in Kipling’s The Ballad of East and West, the answer seems to be that while classroom assessment and large-scale psychometrics should have a healthy respect for each other and what they do, they are, in fact, two distinct disciplines, designed to serve different purposes and address different questions. How the two can best coexist within NCME and how NCME can best serve both going forward are still open questions.

In 2020, looking backward and looking forward, Steve Sireci brought equity to the forefront of the mission of NCME as he offered reasons for the public’s apparent increasing distrust of educational measurement and proposed five core values to help educational measurement better serve education.

It’s A Revolution – It’s the Fight of Our Lives – These Things Will Change

The only thing certain about the future of educational measurement is that it will look significantly different from its present and its past. What is measured, who is measured, by whom (if it is still a whom and not a what), how often, and for what purposes are all likely to look very different even by the next time that NCME returns to Chicago.

The array of processes and procedures that fall under the umbrella heading of psychometrics will be very different from those that we use now or have employed in the past – although many of the challenges and pitfalls associated with our current processes and procedures will remain the same.

Not only is educational measurement changing, so too are educational assessment and educational testing.

It will not be easy to successfully navigate all of this change. As we know, change is something that has always brought educational measurement and assessment to its knees.

More important than the Standards, or even a consensus definition of validity, it will be critical that NCME operate from a set of core values, whether those are the values proposed by Steve or other values, and that individuals and the organization embrace Derek Briggs’s 2022 challenge to keep learning and to always have the courage and willingness to ask: Why?

Adventure, Fantasy, and new Frontiers await us in the brave new world of Tomorrow.

Image by Michelle Raponi from Pixabay

Published by Charlie DePascale

Charlie DePascale is an educational consultant specializing in the area of large-scale educational assessment. When absolutely necessary, he is a psychometrician. The ideas expressed in these posts are his (at least at the time they were written), and are not intended to reflect the views of any organizations with which he is affiliated personally or professionally.
