Financial Poise

Time to Dump the Turing Test in Light of AI

To say AI is the new “hot thing” would be putting it mildly. Following the astonishing success of ChatGPT’s release, everyone rushed to find applications, and that rush has shifted a great deal of focus back to the so-called “Turing Test.”

Some attempts have been successful. Others have raised eyebrows. But as the market rewards the pursuit of ever more advanced AI models by pushing tech stocks ever higher, one has to ask: what would the consequences be if this technology skipped the Uncanny Valley altogether? Should we risk it?

Testing AI with Turing

In 1950, mathematician Alan Turing – often regarded as the father of modern computer science – published a paper that changed the game. In it, he proposed a test to determine whether a machine could display intelligence indistinguishable from that of a human.

Turing’s test, originally called the “imitation game,” was based on natural language conversations between a human and a machine, conducted in text only. The test did not require that the answers to questions posed by either party be correct; it asked only whether those answers could be used to determine whether the participant giving them was a human or a machine.

If a third-party human evaluator could not reliably distinguish which participant was the machine and which was the human, then the machine could be said to have passed the test.
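To make the mechanics concrete, here is a minimal sketch of that evaluation loop in Python. It is purely illustrative: the reply functions and the chance-level evaluator are invented stand-ins, not real systems, and the only point is that “passing” means the evaluator’s guesses are no better than a coin flip.

```python
import random

# Toy sketch of Turing's imitation game (purely illustrative; the
# reply functions below are hypothetical stand-ins, not real systems).
def human_reply(question):
    return "It depends on the context, I suppose."

def machine_reply(question):
    return "It depends on the context, I suppose."

def imitation_game(questions, evaluator):
    """Fraction of rounds in which the evaluator correctly picks out
    the machine from the text answers alone."""
    correct = 0
    for q in questions:
        players = [("human", human_reply(q)), ("machine", machine_reply(q))]
        random.shuffle(players)                      # hide which answer is whose
        guess = evaluator(q, [ans for _, ans in players])  # index of suspected machine
        correct += players[guess][0] == "machine"
    return correct / len(questions)

# An evaluator who does no better than chance (~0.5) means the machine
# "passes" by Turing's criterion.
score = imitation_game(["What do you do on a rainy Sunday?"] * 100,
                       evaluator=lambda q, answers: random.randrange(2))
print(f"Evaluator accuracy: {score:.2f}")
```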

How Close Are We to Passing the Turing Test?

By the standard of Turing’s test, recent AI models must be accepted as “intelligent” to at least some extent. Some of the technology’s applications feel innocuous… for now. An excellent example is the recent segment in which a Bloomberg reporter interviewed an AI version of himself. Warning: it may unnerve you.

Yet, for all the convincingly human-like responses these models give to the questions and commands put to them, I do not regard them as “intelligent.” The problem lies not in the capabilities of the models, but in the fact that we humans are too easily fooled.

The Turing Test Was Always Fatally Flawed

Turing’s test is akin to arguing that if a stage conjuror can fool the audience with one of his illusions, then he may be said to have performed “magic.” In the heyday of the all-time greats, many magicians claimed to possess supernatural or spiritual powers as part of their acts.

Other illusionists, such as Harry Houdini, made a practice of debunking these claims. He specialized in showing that the magical effects were the result of clever trickery and of then little-understood modern technologies such as electricity.

Indeed, in more recent times, the science fiction author Arthur C. Clarke argued, “Any sufficiently advanced technology is indistinguishable from magic.” 

The illusionists of the past fooled many people, including some very famous ones, such as Sir Arthur Conan Doyle, author of the Sherlock Holmes stories, who was willing to believe in the existence of fairies.

AI Blindsides Turing

When we attend a show in Las Vegas today presented by great modern magicians such as David Copperfield or Penn & Teller, we know that it is all an illusion. We pretend, for the moment, that it is magic, because we simply want to be entertained.

Likewise, when confronted by the output of ChatGPT, we may feel amazed. Look at how human-like its output feels! Consider how we might best leverage the new technology in our work! Large language models will certainly prove to have great commercial applications. Just ask Adobe, whose integration of AI capabilities into its creative suite has already made waves.

Like any technology, this one will be used for purposes both good and ill. Sometimes what starts out as good intent becomes problematic for organizations and downright dangerous to end users. The National Eating Disorders Association had to take its therapeutic AI chatbot “Tessa” down after users reported receiving irrelevant and blatantly harmful “advice” from it.

But the organization also fired its very human staff and volunteers in favor of the technology. Perhaps that proves the point.

How to Parse the AI Goldrush

We should not let ourselves be tricked into thinking that these models display real intelligence. They are extremely sophisticated statistical models that excel at predicting the most likely word to appear next in a sentence. But they do so without any understanding whatsoever of what that word means, let alone the sentence, paragraph, or article of which it forms a part.
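To see what “predicting the most likely next word” means in practice, here is a deliberately crude sketch: a toy word-frequency counter in Python. It is purely illustrative and nothing like a production large language model, which uses neural networks trained on vast amounts of text, but the underlying point stands: the prediction comes from statistics over the training data, not from any grasp of meaning.

```python
from collections import Counter, defaultdict

# Toy sketch of next-word prediction from raw frequency counts
# (purely illustrative; the tiny "corpus" below is invented for the example).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word, with no notion
    of what either word actually means."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))   # 'cat' -- chosen only because it is most frequent
print(predict_next("cat"))   # 'sat' -- ties broken by first occurrence
```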

For all of Alan Turing’s brilliance and far-sightedness, his test for determining whether a machine may be said to possess intelligence is no longer fit for purpose. It’s time to dump it and replace it with something better. Let’s not prove the experts sounding the alarm about AI’s dangers right.

Editors’ Note: This article is a revised and expanded version of one initially published on LinkedIn. You can find Paul Shotton on LinkedIn here.




© 2023. DailyDAC™, LLC d/b/a Financial Poise™. This article is subject to the disclaimers found here.


About Paul Shotton

Paul Shotton is the CEO of Tachyon Aerospace, an aerospace technology company, and the Founder of White Diamond Risk Advisory, which advises CEOs, boards, young entrepreneurs, and start-up companies on how to grow revenues, how to maximize operational leverage, and how to identify risks, so as to ensure they are adequately compensated or else mitigated.…


