Computer pioneer Alan Turing’s remarks in 1950 on the question “Can machines think?” were misquoted, misinterpreted, and morphed into the so-called “Turing Test”. The modern version says that if you can’t tell the difference between communicating with a machine and communicating with a human, the machine is intelligent. What Turing actually predicted was that by the year 2000 people would be using words like “thinking” and “intelligent” to describe computers, because interacting with them would be so similar to interacting with people. Computer scientists do not sit down and say, “Alrighty, let’s put this new software to the Turing Test. By Grabthar’s Hammer, it passed! We’ve achieved Artificial Intelligence!”
The problem with the experiment is that there exist sets of instructions that cannot be completed without understanding, because each iteration depends conditionally on state built up by the previous ones.
In that case, only an agent that actually understands the state as expressed in Chinese would be able to continue successfully.
So it’s a great experiment for the solipsism of understanding as it relates to following pure functional operations, but not for functions with state-changing side effects, where future results depend on understanding the current state. A toy sketch of the distinction follows.
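To make that concrete, here’s a toy sketch (all names and messages are hypothetical, the Chinese is just placeholder dialogue): a stateless rule book can be followed with zero understanding, but a task whose correct output depends on accumulated conversational state can’t be reduced to any fixed per-message lookup table.

    # Stateless rule book: output depends only on the current input.
    # An operator can apply this with zero understanding.
    STATELESS_RULEBOOK = {"你好": "你好!", "再见": "再见!"}

    def stateless_reply(message: str) -> str:
        """Pure lookup: the same input always yields the same output."""
        return STATELESS_RULEBOOK.get(message, "请再说一遍")  # "please repeat"

    # Stateful task: the correct reply depends on facts established
    # earlier in the conversation, not just on the current message.
    def stateful_reply(message: str, state: dict) -> str:
        if message.startswith("我叫"):                # "My name is ..."
            state["name"] = message.removeprefix("我叫")
            return "很高兴认识你"                      # "Nice to meet you"
        if message == "我叫什么名字?":                 # "What is my name?"
            # No per-message lookup table gets this right for all possible
            # earlier messages; the operator has to track, i.e. understand,
            # the conversation state.
            return state.get("name", "你还没告诉我")   # "You haven't told me"
        return "..."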
There’s a pretty significant body of evidence by now that transformers can in fact ‘understand’ in this sense: interpretability research into neural-network features via sparse autoencoders (SAEs), linear representations of world models starting with the Othello-GPT work, and the Skill-Mix evaluations, where GPT-4 and later models combine skills at a level of complexity that is beyond reasonable statistical chance of succeeding without understanding them.
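To illustrate what a ‘linear representation of a world model’ means in the Othello-GPT line of work, here’s a minimal probe sketch. The data below is synthetic with a planted linear signal; in the real work, the activations come from a transformer trained on Othello move sequences and the labels are the true board-square states.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    d_model, n_samples = 512, 2000

    # Stand-in for hidden activations at some layer (one row per position).
    acts = rng.normal(size=(n_samples, d_model))
    # Stand-in label for one board square; we plant a linear signal so the
    # probe has something to find, mimicking what Othello-GPT probes recover.
    w_true = rng.normal(size=d_model)
    labels = (acts @ w_true > 0).astype(int)

    probe = LogisticRegression(max_iter=1000).fit(acts[:1500], labels[:1500])
    print("probe accuracy:", probe.score(acts[1500:], labels[1500:]))
    # High accuracy from a purely *linear* readout is the evidence that the
    # board state is linearly encoded in the model's residual stream.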
If the models were just Markov chains, where the next output depends only on the current state and not on the history that produced it, the Chinese room would be very applicable. But pretty much by definition, transformer self-attention violates the Markov property: every prediction conditions on the entire context.
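Rough sketch of that difference (minimal, with the learned projection matrices omitted for brevity): a bigram Markov chain’s next-token distribution sees only the current token, while self-attention mixes information from every position in the context into each prediction.

    import numpy as np

    # Bigram Markov chain: the next-token distribution is a function of
    # the single current token; everything earlier is invisible.
    def markov_next(current_token: int, transition: np.ndarray) -> np.ndarray:
        return transition[current_token]        # P(next | current) only

    # Single-head self-attention (no learned projections): the output at
    # the last position is a weighted mix of *every* token's representation.
    def attention_last(x: np.ndarray) -> np.ndarray:
        """x: (seq_len, d) token vectors; returns a (d,) context vector."""
        q = x[-1]                               # query from the last position
        scores = x @ q / np.sqrt(x.shape[1])    # one score per history position
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                # softmax over the full context
        return weights @ x                      # output conditions on all of it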
TL;DR: It’s an obsolete thought experiment whose continued misapplication flies in the face of empirical evidence dating back to at least early 2023.
It was invalid when Searle originally proposed it, because it assumes a unique, mystical ability in the atoms that make up our brains. For Searle, the atoms in our brains have a quality that cannot be duplicated by other atoms, simply because those atoms aren’t part of what he recognizes as a human being.
That’s why he claims the machine-translation system is incapable of understanding: granting that it understands would mean granting that machine understanding is possible at all. It’s self-contradictory; he won’t consider it possible because it hasn’t been shown to be possible.
The Chinese room experiment only demonstrates that the Turing test isn’t valid. It’s got nothing to do with LLMs.
I would be curious about that significant body of research though, if you’ve got a link to some papers.