December 28, 2023
Is the Turing Test still even a good standard for determining genuine artificial intelligence?
When Alan Turing designed the Bombe in 1939, an electromechanical machine built to break the Nazis’ Enigma cipher, he was already thinking about how machines might one day become artificial minds.
How will we know when machines can think? To answer that question, he devised the “imitation game” in his 1950 paper “Computing Machinery and Intelligence”—what we now call the Turing Test.
At the time, computers couldn’t even process a document, send an email, or do any of the other now-mundane tasks that we expect from them. Heck, there weren’t even room-sized mainframes or transistors. Computers weren’t yet digital.
And yet Turing, who understood in some deep way what computation is at its core, saw the building blocks of thought. Perhaps it even seemed inevitable that computers would become artificially intelligent once hardware and software grew sufficiently advanced, nearly a century later as it turns out.
This milestone, he reasoned, would arrive when an ordinary person could hold a conversation with a computer and not be able to tell whether they were speaking with a person or a machine.
So now let me ask you, reader: when you interact with AI chatbots, can you tell the difference?
I’m sure some of you can, but there are also plenty who can’t. They’re very convincing, especially if we don’t look too closely or probe too deeply.
And those capabilities are only going to expand in coming years. I’d bet my desktop that by 2039, a century after Turing’s wartime machine, machine learning technology will have advanced dramatically.
But remember that the technology industry has touted AI for a while now; the first AI summit, the Dartmouth workshop, was held in 1956. Over the decades, we saw the emergence of rule-based, decision-tree systems that were far from Turing Test–approved. And no matter how you feel about ChatGPT, the “narrow AI” systems of the past five or so years also couldn’t come close to passing a Turing Test.
So we’ve devised a moniker, artificial general intelligence, a.k.a. AGI, to differentiate these two paradigms.
AGI, so the thinking goes, will be the name for the technology that passes the Turing Test with flying colors. It’s the one that can hold a conversation just as well as you or I can, the one that will be liberated from the narrow use cases that we demand of today’s AI: recognizing images, detecting anomalies in data, sorting objects (or people) into categories.
If today’s chatbots are approaching Turing Test certification, tomorrow’s AGI will smash it.
And that’s really the plan. Sam Altman, CEO of OpenAI, a closed-source AI development firm that has accepted over $10 billion in investment from industry juggernauts like Microsoft, says that AGI is on its way, coming soon, or maybe already here. His company, which develops blockbuster applications like ChatGPT and DALL-E, already uses the term “AGI” in its marketing and other communications.
Altman and his disciples may already see their AI as general; maybe their chatbot’s ability to imitate speech was strong enough to give it a Turing pass.
But here’s the problem, both with today’s imitation powers and maybe even with the Turing Test in general:
Human experience does not boil down to statistical probability.
Even when we look at physics and the probabilistic interplay of subatomic particles as described by quantum mechanics, we’re left with a discrepancy that remains unsolved, despite generations of brilliant minds searching for a theory of everything.
Probability plays its role, especially in the ultrasmall, but there remain fundamental conflicts between the microscopic quantum world and the macroscopic reality we inhabit along with our planet, our star, and even our galaxy. The Standard Model of particle physics can’t account for gravity.
This discrepancy between Einstein’s Theory of General Relativity and quantum mechanics remains a thorn in scientists’ sides, or—depending on who you ask—it’s proof that there is mystery in this world.
All of this is to say that human experience is not governed by probabilistic machinery any more than celestial bodies follow the rules of quantum mechanics.
There’s something else. There’s something that defies computation, something that can’t be generated by ingesting every word that came before and running a calculation for the next most probable word.
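That calculation, for what it’s worth, is easy to sketch in its crudest form. Here is a toy bigram model in Python (a drastic simplification of how real language models work, with an illustrative corpus and made-up function names, but the core idea is the same: predict the next word from the words that came before):

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; real models ingest billions of words.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_probable_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(most_probable_next("the"))  # "cat" follows "the" twice, "mat" once -> cat
```

Scale that counting trick up by many orders of magnitude, swap the table for a neural network, and you have the statistical heart of today’s chatbots.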
Even just among English speakers, how many words have we devised to help point us toward this mystery? Spirit, soul, God, essence, mysticism, and magic are just some of them.
Will AGI ever be able to replicate that? I know enough to say that I can’t know.
What I do understand, however, is that these intangible, unquantifiable concepts should not be dirty words among technologists. This is especially true as AI continues to improve.
Hard-nosed pragmatists may scoff. Much better to fully illuminate the room, they believe. Better to distill both nature and our experience of living within it down into bits.
Even if a computer can play an imitation game, even if you’re not always sure there’s a person on the other end, you may feel that something is off. Even if you can’t quite place it.
Look toward the shadows.
Reality is there, whether or not we are able to swallow it whole and come up with tidy words to describe it.