Thursday, May 26, 2022

Thoughts about the Turing test


Random thought of the day:

I was thinking about the Turing test while driving back to Round Rock from Denton today … perhaps due to the mind-numbing news about the massacre in Uvalde. The Turing test was devised by Alan Turing in 1950 as a way to determine whether a computer program could be deemed intelligent – see https://en.wikipedia.org/wiki/Turing_test. Joseph Weizenbaum at MIT created a program called ELIZA in 1964 that was designed to imitate a Rogerian psychologist. Weizenbaum claimed that if an observer could not distinguish ELIZA from an actual human psychologist, then one would have to say that ELIZA exhibited intelligent behavior. Weizenbaum’s ulterior motive might have been to show that communications between a human and a machine were somewhat superficial – see https://en.wikipedia.org/wiki/ELIZA.
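To give a flavor of how such a program works: ELIZA-style conversation rests on little more than keyword patterns and pronoun reflection. The short Python sketch below is a toy illustration of that general technique, not Weizenbaum’s original program (which was written in MAD-SLIP); the patterns, responses, and names in it are invented purely for illustration.

import re

# Pronoun reflections so a captured fragment can be echoed back at the speaker,
# e.g. "my job" becomes "your job" (a deliberately tiny, hypothetical table).
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# A few keyword patterns with canned Rogerian-style responses.
# "{0}" is filled with the reflected text captured by the pattern.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]
DEFAULT_RESPONSE = "Please tell me more."

def reflect(fragment):
    # Swap first- and second-person words so the echoed fragment reads naturally.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement):
    # Return the canned response for the first keyword pattern that matches.
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT_RESPONSE

print(respond("I am worried about the Turing test"))
# Prints: How long have you been worried about the turing test?

A handful of such rules is enough to sustain a surprisingly convincing exchange, which is part of what Weizenbaum found troubling.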

In any case, while many argued that ELIZA was not sufficient to pass Turing’s test, being an arrogant young researcher at the Air Force Research Laboratory in San Antonio, I thought I could do better. In the latter part of the 1980s I was responsible for a group, led by Dave Merrill, that had created a program that could generate an effective aircraft maintenance lesson from existing databases in a matter of minutes, and I had a lesson created by a human for a similar task. Could folks identify the one created by the program when shown both? Unfortunately, our example failed the Turing test because our program used the line art found in those existing databases, whereas the human-designed lesson used much more appealing graphic art to support the lesson, which, as best I can recall, involved removing the radar from an F-16. The fact that our program generated its lesson in only minutes and was based on the latest model of the F-16 was irrelevant in that failed attempt to pass the Turing test.

Now I am thinking that there should be an alternative to the Turing test. Rather than trying to distinguish a human-generated example from a machine-generated example of a representative task, it makes more sense to me some 36 years later to identify things a typical human can do that the best machine program cannot do. Granted, computers can beat me at chess every time, and some computer programs can even beat chess grandmasters, so I do not feel so bad.

However, there are probably things a person can do that a machine cannot come close to matching. What are some of those things? In my case, because I am so insecure, I think I can wonder whether I was right about X, revisit my reasoning and alternative evidence, and possibly reach a different conclusion. Can a computer program do something like that? Can a computer program doubt its own output, reflect, reconsider, and re-examine things? Perhaps not yet … and then, when that becomes possible, I will wonder whether a computer can laugh at its former response and say how stupid or naïve it was.

My conclusion now is that my insecurity has finally proven worthwhile.

Mike Spector

May 26, 2022

