What Do We Want From AI?

While scrolling through LinkedIn last week, I came across three posts in short succession that left me wondering what the future of AI in education might look like if we had our druthers. Two were posts by long-time friends and colleagues, well-respected leaders in our field. The third was a post by someone whose name I didn’t recognize, because, well, I think that’s the way LinkedIn is supposed to work.

The first was a post by Kristen Huff of Curriculum Associates following the SXSW conference. Kristen posted:

Compelled to amplify this spot-on quote from my colleague Amelia Kelly.
“If AI technology doesn’t work equally well for all kids, then it does not work.”

After watching the short video that accompanied the original post, I found myself nodding in agreement, as is the norm after reading Kristen’s posts.

Next I came across a post from Juan D’Brot from the Center for Assessment. As usual, Juan led with a pithy quote:

“Don’t let the perfect be the enemy of the good. – Voltaire”

Now, although on occasion I have seen that quote misappropriated to justify sloppy work by sloths who, I’ll wager dollars to doughnuts, have never read a word of Voltaire, in general I find it difficult to disagree with either Voltaire or Juan.

At that point, I began to wonder how those two thoughts I agreed with fit together. Is working equally well for all kids (i.e., the perfect) simply too high a standard to hold AI technology to? On the other hand, what unintended consequences might there be if we settle for a lesser standard (i.e., the good)?

A short time later, I came across the third post, and there in a bright red box was the oft-quoted line from Dylan Wiliam’s 2018 book Creating the Schools Our Children Need:

“Everything works somewhere, and nothing works everywhere.”

Then, in a flash, it all came together for me. We may want AI technology to work equally well for all students, but it’s highly unlikely that we will ever want the same technology working the same way for all students. Some AI-driven or AI-assisted innovations may be designed to support certain students in certain situations, and that is good.

Seeking A Much-Needed Boost From AI Technology

Not too long ago, you couldn’t attend a meeting on state testing without hearing the term “differential boost.” Inclusion and access were all the rage (and the law), and states were scrambling to expand the menu of testing accommodations available to students with disabilities. A key factor in validating an accommodation for use on state tests was “differential boost”; that is, whether the accommodation raised the performance of the students with the particular disability for whom it was intended without affecting the performance of other students.
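To make that pattern concrete, here is a minimal sketch in Python with entirely hypothetical scores. The group names, numbers, and simple mean-difference comparison are illustrative assumptions on my part, not an actual validation procedure.

```python
# Hypothetical illustration of "differential boost" for a testing accommodation.
# All numbers are invented; a real validation study would use matched samples
# and statistical tests, not eyeballed mean differences.

def mean(scores):
    return sum(scores) / len(scores)

# Scores without and with the accommodation (e.g., a read-aloud accommodation).
target_without = [48, 52, 45, 50, 47]  # students the accommodation is intended for
target_with = [58, 61, 55, 60, 57]

other_without = [70, 74, 68, 72, 71]   # students it is not intended for
other_with = [70, 75, 67, 72, 70]

target_boost = mean(target_with) - mean(target_without)
other_boost = mean(other_with) - mean(other_without)

print(f"Boost for intended students: {target_boost:+.1f} points")
print(f"Boost for other students:    {other_boost:+.1f} points")

# Differential boost: a clear gain for the intended group, essentially none for others.
if target_boost > 0 and abs(other_boost) < 1:
    print("Pattern consistent with differential boost.")
```

In this toy example the accommodation adds roughly ten points for the students it was designed for and essentially nothing for everyone else, which is exactly the pattern states were looking for.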

Thankfully, we long ago abandoned the misguided delusion that states should be the ones responsible for validating commonly applied testing accommodations, but the concept of differential boost still applies when considering the application of AI technology to what many identify as the number one challenge facing public education: reducing achievement gaps.

To eliminate achievement gaps we need solutions that ensure equal opportunity to learn, which requires not only access to high-quality instruction but also access to conditions that support learning. AI technology may be able to deliver both for students who currently have one but not the other, or who have neither. We don’t need those AI solutions to work equally well in places where they are not needed. Applying them where they are not needed may, in fact, be detrimental, in much the same way that other assistive technologies may hamper the performance of people who do not need them.

To reduce existing achievement gaps we may benefit from AI technology that helps increase the rate of learning for students who are lagging far behind. Or we may find ways in which AI technology can alleviate the impact of those achievement gaps while they still exist. Once again, that AI technology need not be perfect and need not work equally well for all students. We only need it to work very well for the students for whom it is intended.

All Rarely Means All, Y’all, But You Know What I Mean

I nodded in agreement when I read Amelia Kelly’s statement, “If AI technology doesn’t work equally well for all kids, then it does not work,” because I interpreted it as meaning that AI technology has to work equally well for students who have been historically marginalized and denied equal opportunity to learn. A specific AI technology may be intended for a targeted audience, but it cannot systematically exclude certain students who are members of that audience; and if it is less effective for some students in that group than for others, we need to keep working to make it better or to find alternative solutions.

Are there real or perceived dangers to a targeted focus? Sure. No Child Left Behind targeted students performing below “Proficient” in reading and mathematics, which made perfect sense for a program largely housed within Title I. Some people, however, saw unintended negative consequences. Regulations that held schools accountable for the percentage of proficient students could lead schools to place an inordinate focus on the “bubble kids” at the expense of lower-performing students. Capping accountability indices at the Proficient level could lead schools to ignore higher-performing students.

We have to keep an eye on all students, even if a particular AI technology or AI-driven policy is only intended to benefit some students.

The Future Is Bright

If there is one certainty about the future of public education, including educational assessment and accountability, it is that artificial intelligence (AI) will play a significant role. And the future of AI technology in education is bright; or put another way, the future of education looks brighter today thanks in large part to the potential of AI technology to solve wicked problems that we have been unable to solve on our own. But sometimes things can be too bright. At this time of year, I experience that every time I drive up our street in the morning or down our street in the late afternoon. The sun is so bright that I cannot see anything in front of me for about 20 to 50 yards.

In some ways, we are at that same point regarding the use of AI technology in education. The future is bright, but there are blind spots where we just cannot see what lies ahead. We need guardrails, bright lines, pillars, or principles to help guide us and to help us guide AI. The quality of those principles and the resulting products will depend in large part on our ability to interpret, evaluate, and synthesize statements like these:

  • If AI technology doesn’t work equally well for all kids, then it doesn’t work.
  • Don’t let the perfect be the enemy of the good.
  • Everything works somewhere, and nothing works everywhere.

We need to be clear about what we want from a particular piece of AI technology, what we want it to accomplish, how we hope it will be used, by whom, and for whom. We have to anticipate the unanticipated.

I am not suggesting that we have to know all of that in advance. I am not naïve enough to think that we can even begin to imagine the possibilities that AI offers, and I believe that Steve Jobs’s statement that “people don’t know what they want until you show it to them” is probably more true today than it was when he made it.

But when they do show it to us, we have to be ready.


Published by Charlie DePascale

Charlie DePascale is an educational consultant specializing in the area of large-scale educational assessment. When absolutely necessary, he is a psychometrician. The ideas expressed in these posts are his (at least at the time they were written) and are not intended to reflect the views of any organizations with which he is affiliated personally or professionally.