The misleading side of the Turing Test
In the 1950s, when Artificial Intelligence was merely a science-fiction concept, Alan Turing came up with a test to figure out whether a computer could think like a human. The test was a game in which a computer and a human would each interact with a separate Judge, both trying to convince the Judge that they were the real human and not the other.
Hence the common scenario of the imitation game: both Human and Machine interact with the Judge only through text messages on a computer screen. If the Judge mistakenly identifies the Human at least half of the time, the Machine is said to have passed the test.
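To make that pass criterion concrete, here is a minimal sketch of how such a tournament could be scored. The `judge_picks_human` function is a hypothetical stand-in for a full interrogation; here it simply guesses, purely so the sketch runs end to end:

```python
import random

def judge_picks_human(human_msg, machine_msg):
    # Hypothetical stand-in for a real interrogation: returns True when
    # the Judge correctly picks the real Human. Here the Judge just
    # guesses at random, purely to keep the sketch runnable.
    return random.random() < 0.5

def machine_passes(n_rounds=1000):
    # One round = one conversation; the Judge must decide who is who.
    misidentifications = sum(
        not judge_picks_human(f"human reply {i}", f"machine reply {i}")
        for i in range(n_rounds)
    )
    # Turing's criterion as stated above: the Machine passes if the
    # Judge misidentifies the Human at least half of the time.
    return misidentifications / n_rounds >= 0.5

print("Machine passed:", machine_passes())
```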
When hearing about this scenario, one cannot help but think of the best possible thing to ask to establish that elusive, real self-consciousness: maybe by calling for the understanding of emotions, maybe by discussing art or philosophy (you know, that very-human stuff). And the most frustrating part is not even that all these matters quickly fade as decisive evidence, since the imitation game can go as far as imitating them just as well. Any effect can be mimicked in the absence of its cause.
No, the most frustrating part is switching roles in this mental scenario and becoming the evaluated Human yourself. Why the instinct to think of ourselves only as the Judge in this Turing tournament? Maybe we, ourselves, are the ones who must prove our own Humanity, and do so armed only with a keyboard and a screen. Now that is a deep dive into existentialism.
This, too, is not a totally new avenue of reasoning. We play the imitation game throughout our day, both as the Judge and as the suspected Imitator. It's called Society. With each new interaction we have to decide whether the other person has some malicious hidden intention; with each new piece of information we have to at least filter out the bias (and, in the worst cases, detect the fake news). Are we the target of some very well-executed piece of propaganda, or is the other side?
But let us return to the classical (and friendlier) definition of the Turing Test. The problem with this paradigm is that it stresses the importance of certain intelligence attributes on the Machine's part, yet at the same time reduces the problem to a very low-dimensional shape. See, it is no coincidence that the test has the Human and the Machine communicating only through text. Intuitively, we all understand the importance of this restriction: it would be much harder were they to communicate through, let's say, speech, or even more so, in a face-to-face debate.
Because all of these would be additional layers the Machine would have to imitate as well. We currently have Artificial Intelligence able to hold very decent conversations in text. We also have systems (albeit not as performant) able to synthesize speech from written content. And, just as well, we have humanoid robots that imitate human movements and gestures (pretty convincingly, if seen from a sufficient distance). We may imagine specific scenarios in which each one of these would pass a Turing test of its own, even today, September 6th, 2023. But having them all together, simultaneously? That is not something we will have any time soon.
This is the importance of the communication medium in the Turing Test. The question of intelligence in the test is, in fact, a question of how low- or high-definition the communication window between the Human, the Machine, and the Judge is. All intelligence communicated in the test is limited to its projection into this expression space. It is like a game of shadows in which we can never know more about the others than what we manage to see on the projection screen. And this raises some new problems.
No lower-dimensional projection can encompass all of the original information. And this is not necessarily a problem; in fact, quite the opposite. The world we live in relies, abstractly and practically, on using projections rather than their original counterparts. No advanced civilization is built without the capability of abstracting the features relevant to the context while ignoring the others, just as no 5th-grade physics problem would be solvable were we to account for every friction force or external acceleration.
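As a toy illustration (my own sketch, not part of the original argument): project a 3-D point onto a plane by dropping one coordinate, and two different originals become indistinguishable.

```python
import numpy as np

def shadow(point):
    # Orthogonal projection onto the xy-plane: keep x and y, drop z.
    return point[:2]

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, -7.0])  # differs from a only along the lost axis

# Two distinct originals, one and the same shadow: the projection is not
# injective, so nothing on the screen can recover the discarded z values.
print(shadow(a), shadow(b))                  # [1. 2.] [1. 2.]
print(np.array_equal(shadow(a), shadow(b)))  # True
```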
The matter is not as straightforward when discussing Human-to-human interactions. In a way, we have been accustomed to this kind of practical superficiality since forever. If you go to the post office, you are not particularly interested in the intimate life of the clerk handling your letters, but only in the projection of their life that is their current job. And we call someone who efficiently detaches themselves from the job "a professional", and rightfully so; we reward and encourage it.
The first workforce sectors to be taken over by Artificial Intelligence (it is already happening) are, by no coincidence, Customer Support and Copywriting. And this happened not because the AIs achieved self-awareness or any deeper form of thought. No, we know the exact mathematical criterion these AI models were trained against: a text-completion task, generating the next token for a given sequence. The fact is that, in these situations, true Human intelligence is somewhat of an overkill. The job of a customer support agent requires answering a set of questions with pre-defined information and escalating to a supervisor when needed. Why would we need the self-awareness of a living, breathing, loving, suffering, caring, thinking human for this?
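For the curious, here is "generating the next token" reduced to its crudest possible form: a toy bigram counter of my own devising. A real model conditions on the whole preceding sequence with learned weights, not on one word with raw counts, but the objective is the same in spirit: given what came before, emit the likeliest continuation.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on vastly more text, split into
# subword tokens rather than whole words.
corpus = ("the customer asked the question and "
          "the agent answered the question").split()

# Count how often each token follows each other token.
transitions = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    transitions[current][following] += 1

def next_token(token):
    # "Generate" the next token: return the most frequent continuation
    # observed during training.
    return transitions[token].most_common(1)[0][0]

print(next_token("the"))  # "question": it follows "the" twice in the corpus
```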
Apart from the question of unemployment (a recurrent one with every technological advancement), this does not necessarily pose an existential crisis for humankind. But now consider how the personal life of the individual has changed as well. We are gradually turning our innermost necessities into optimized commodities: the need to relate to a friend, the need to connect with a partner, even the need to be able to strike up an unexpected conversation with a stranger. All of this is fundamentally tied to our own sense of being, which is not compatible with a projection into the lower-dimensional space of efficiency.
We find ourselves more accustomed to sending an emoji than to elaborating our response in a sentence (in fact, it already feels awkward to do otherwise). We can exchange memes and instantly convey a certain attitude towards a however-complex topic. An image is worth a thousand words, so make a video! It is true, there is efficiency at play here. But let us not forget: this is not us at our job trying to do more in less time, this is us trying to optimize being ourselves.
The Turing Revolution, let's call it, the moment when human-like machines will operate seemingly indistinguishably from their human counterparts, might not be that far down the road. It doesn't have to be all cyborgs and robots and what-not. It might as well be much simpler and more familiar to the present day. We might have turned our interactions into something sufficiently superficial that there is no more granularity to distinguish, no more substance to sense behind them. The game of imitation reduces to the same usage of the same emojis. We might lose the Turing test ourselves, not through the intelligence of Machines, but through the flatness of Humanity.