Artificial Intelligence: It’s Only Human

Everywhere I turn there seems to be another article or webinar or news report or tweet about the dangers of artificial intelligence – or AI as it’s affectionately known among friends. On any given day, AI jockeys with the outcome of the 2024 election, the latest decision from the Supreme Court, and Caitlin Clark for the runner-up position behind climate change as our greatest existential threat.

I can easily wrap my head around the concept of AI as an existential question; that is, a question about what it means to be human and to be intelligent. That’s a rich vein that we can tap for a very long time.

I have real trouble, however, thinking of AI as an existential threat, that is, an extinction-level event, or an E.L.E. in the parlance of Deep Impact – the underrated 1998 sci-fi drama about the imminent end of the world that first gave us Morgan Freeman as President of the United States, who, by the way, remains my top choice if forced to select an octogenarian for president, and the film was also the first dramatic lead for Téa Leoni, who, coincidentally, two decades later as Elizabeth McCord also played the President, one of a long line of fictional female US presidents in popular culture; and for testing buffs, this movie about the end of the world as we know it and all whom we hold precious and dear includes a couple of scenes filmed just down the street from the Georgetown offices of the company formerly known as AIR Assessment, which, as I’m sure you’ll agree, cannot be a coincidence; but now, I’ve digressed from my digression in this remarkably long stream-of-consciousness run-on sentence with thoughts that could only be strung together by a sentient being with an advanced degree.

Let’s Chat About Artificial Intelligence

The first thought that comes to mind when I hear the term artificial intelligence is, “Crap, we would be in a much better position with artificial intelligence if only we had been able to better define and measure human intelligence at some point in the past 125 years.” I’ll also admit to a recurring Dickensian nightmare in which automated (but still not animated) psychometricians use intelligence testing in an attempt to prove that machines are more intelligent than humans – karma is a bitch. Note to self: No more limited-edition Frank’s RedHot Goldfish Crackers as a late-night snack.

My thoughts then usually turn to how quick we are to mislabel as artificial intelligence just about anything that involves a computer and that we don’t understand. The rush to dub any scoring approach that uses an algorithm as “AI scoring” has to be the prime example in our field.

But are we really regarding computers any differently than we do humans? Don’t we have a tendency to ascribe “intelligence” to people who are able to talk logically and confidently about things that we ourselves don’t understand?

We do the same for those people who impress us with their musical, artistic, mathematical, interpersonal, or mechanical skills – or intelligences if you want to go down the Howard Gardner path.

It’s only human.

I am not immune. While I can count on one hand (maybe two) the measurement and assessment specialists whom I have regarded as highly intelligent over the years, I am much more likely to be impressed by physicians and physicists, or any physical scientists for that matter, and even some meteorologists and lawyers who can discuss and do things that I don’t fully understand. If familiarity doesn’t breed contempt, it does breed a tendency to underestimate people and take them for granted.

If Not “Intelligence,” What Are We Afraid Of?

So, if it’s not really the intelligence of the computers that we’re worried about, what do we fear?

The first thing that comes to mind is the people programming the computers. If not with fear, we should always regard them with a healthy skepticism. That’s obvious. Like any other field and group of practitioners, among them are going to be a handful who are truly evil and have malicious intent, a sizeable portion who simply aren’t very good at what they do, and others who are well-intentioned and competent, but may be too far in the weeds to anticipate the unintended negative consequences of what they are doing.

So far, it’s still humans and human intelligence that are the issue.

Digging a bit deeper, I see a few common themes in discussions about the dangers of artificial intelligence:

  1. The misapplication of algorithms
  2. The predisposition to treat correlation as causation
  3. The use of incomplete models

Once again, all very real, but very human concerns. We are guilty of all of these on a regular, if not daily, basis in our work and in our personal lives.

We didn’t need to worry as much about the misuse of statistical procedures and psychometric processes until any Chris, Leslie, or Charlie could drop any old data into a program and perform an ANOVA, factor analysis, and logistic regression in seconds, or calibrate an item set and equate test forms without thinking about it.
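
A minimal sketch, in Python with a hypothetical file name and hypothetical column names, of just how little friction is involved: a few lines will read whatever data happens to be at hand and fit a logistic regression in seconds, with nothing along the way prompting a second thought about whether the data, the model, or the question makes any sense.

    # A hypothetical illustration of frictionless "analysis":
    # read whatever data is lying around and fit a model in seconds.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    df = pd.read_csv("any_old_data.csv")      # hypothetical data set
    X = df.drop(columns=["passed"])           # hypothetical predictors
    y = df["passed"]                          # hypothetical 0/1 outcome

    model = LogisticRegression(max_iter=1000).fit(X, y)
    print(model.coef_)                        # "results" in seconds, meaningful or not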

But now we need to worry, or at least we should.

What, then, is it about computers that makes them more dangerous than us?

In a word, efficiency.

Computers Can Do It All Night Long

Computers are fast and relentless; they don’t get bored, and they are not easily distracted from the task at hand.

All good features, or characteristics or traits if we are into anthropomorphizing, until they aren’t. We don’t trust people who never sleep; why would we trust computers?

Now this is a real problem, but it’s still a human problem. In part, the answer depends on just how much control and “decision-making” we cede to computers.

Will we be too blinded by efficiency to build in time for checks and balances – either exclusively by humans or by humans with computer assistance?

I am always impressed when our local meteorologist explains why he is choosing the European Model over the American Model this time or vice versa; or when experience tells him that both models are missing something important brewing in the atmosphere.

When we have faced economic threats from other societies that were more efficient and perhaps more driven than us, we have been able to buy time and remain on top by introducing them to fast food, sugary soft drinks, and other defining aspects of Western pop culture. Nothing to be proud of, but highly effective.

I’m not sure that approach will work with computers, but we should be able to build unnecessary redundancies and other inefficiencies into our systems and models. I’m confident that we can excel at such inefficiency if we put our mind to it. We’re certainly good at it unintentionally.  It’s kind of what we do, who we are.

But I still haven’t hit on the real issue, have I?

That deep-seated fear that creeps a little bit closer to the surface every time we watch the Roomba make its way across the floor, or our animated Snoopy watch face seems to know what we are doing, or an ad pops up online for something that we have been talking about over lunch.

What happens when the machines begin to think for themselves?

When the Machines Rise Up

Again, hope is not lost. We can draw on experience.

All of us who have assumed the role of “the person behind the person” have already had to deal with this scenario.

Everything starts out fine. You develop a rapport with the person who is much better positioned than you are to deliver the message to the public about assessment, accountability, growth, etc. You understand their goals and the problems they are trying to solve.

You work with them, you may instruct and train, but mainly what you do is to share your expertise with them and through them. You help them take your knowledge, your information, and sometimes even your wisdom to shape their message and inform their decision-making.

It works well for a while, but then they start to go off script now and then. You have to assign a “body person” to follow close by and note what they said and what they may have promised. A problem, but manageable.

Almost inevitably, you reach the point where they believe that they understand the issues well enough to think for themselves, make decisions on their own, and eschew your guidance on assessment, psychometric, and technical issues.

At that point, you can try to rein them back in, but that seldom works. It’s time for you to let go and move on, to choose another battle, to fight another day. You let them function on their own.

Often they crash and burn quickly.

Sometimes there is enough of a safety net in place that the damage isn’t too bad, but not always. In most cases, fortunately, power, control, and influence are so distributed and limited that any negative effects are neither cataclysmic nor longstanding.

Every now and then, that “appearing logical and confident to people who don’t really understand” phenomenon kicks in, and the denouement takes a little longer.

The good news is that there is a wealth of empirical evidence proving that the length of time that this type of “artificial intelligence” is able to survive is inversely proportional to its impact on anything consequential.

And that’s the hook we can hang our hat on.

Image by Gerd Altmann from Pixabay

Published by Charlie DePascale

Charlie DePascale is an educational consultant specializing in the area of large-scale educational assessment. When absolutely necessary, he is a psychometrician. The ideas expressed in these posts are his (at least at the time they were written), and are not intended to reflect the views of any organizations with which he is affiliated personally or professionally.