The Turing Test, as any self-respecting reader of this website will know, is a test designed to see whether a machine is capable of human thought.
It works like this. A (biological) human, the tester, sits at a computer terminal. Behind a curtain sits something which may be another biological human or may be a machine. They communicate via a text interface (instant messaging is the usual interpretation of this), with the tester unaware of whether it is a human or a machine on the other end.
If, after a lengthy chat, the tester concludes that the thing he or she is communicating with is human, but it is actually a machine, then the machine is "officially" capable of human thought. You'd repeat the experiment several times, putting humans behind the curtain half the time, just to make sure the result isn't a fluke, but that's the essence of the test.
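For concreteness, that repeated-trial protocol can be sketched in a few lines of Python. Everything here is invented for illustration: the five-question chat length, the fifty-fifty split of humans and machines behind the curtain, and especially the judge's crude heuristic (vague, hedged answers read as "human").

```python
def run_turing_trial(judge, subject):
    """One trial: the judge chats with a hidden subject, then delivers
    a verdict, "human" or "machine"."""
    transcript = []
    for _ in range(5):  # a short exchange; real trials would be lengthy
        question = judge.ask(transcript)
        transcript.append(("judge", question))
        transcript.append(("subject", subject.reply(question)))
    return judge.verdict(transcript)

def run_experiment(judge, human, machine, trials=10):
    """Repeat the trial, putting the human behind the curtain half the
    time, and tally how often each subject is judged to be human."""
    results = {"human_judged_human": 0, "machine_judged_human": 0}
    for i in range(trials):
        subject = human if i % 2 == 0 else machine
        if run_turing_trial(judge, subject) == "human":
            key = ("human_judged_human" if subject is human
                   else "machine_judged_human")
            results[key] += 1
    return results

class NaiveJudge:
    """A toy judge: asks one stock question and trusts a single cue."""
    def ask(self, transcript):
        return "What is 2 + 2?"
    def verdict(self, transcript):
        replies = [text for speaker, text in transcript
                   if speaker == "subject"]
        # Crude heuristic: hedged answers sound human, curt ones don't.
        return "human" if any("ish" in r for r in replies) else "machine"

class VagueHuman:
    def reply(self, question):
        return "4, ish? I was never much good at sums."

class LiteralMachine:
    def reply(self, question):
        return "4"

print(run_experiment(NaiveJudge(), VagueHuman(), LiteralMachine()))
```

With these toy participants the human passes every trial and the machine fails every trial, which already hints at the essay's point: the verdict turns entirely on the style of the communication, not on whatever thinking produced it.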
Now, here's what's wrong with it.
The major problem which could arise is if a human being was placed behind the curtain, and the tester concluded that he or she was a machine. That would be bad, obviously! Wouldn't it? It would mean that the test isn't trustworthy, because a machine could, for example, perfectly imitate that human being, and so be capable of human thought by the test's own standard, yet still fail the test.
Now consider all the different types of "irregular" human beings who would completely fail a Turing Test: someone who speaks no English, someone who cannot read or type, a baby, a person in a coma.
Now imagine how increasingly easy it would be for a machine to simulate the responses of each of these different types of human. All of these people would be capable of human thought - to some extent. But could the machines which emulated their communications be considered capable of human thought?
Anyway, the main point I take away from this is that the Turing Test does not test one's ability to think so much as it tests one's ability to communicate, something a thinking being need not be capable of. It is surely fallacious to count a being's information input and output transducers as part of its consciousness.
Of course, this makes the problem of detecting human thought a heck of a lot more difficult. Essentially, we must define thought based not on just a single medium of communication but on all the sensory inputs and outputs a being has, and the way it processes input into output - and we must be prepared to define thought in the absence of either, which presents obvious conundrums (how do you find out what's going on in a brain without altering it in some way, indirectly providing it with input?).
There's one other problem with the Turing Test, and that is its scope. All it can test for is humanity.
Edsger W. Dijkstra once remarked, "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." Thinking about this, I came to the conclusion that the different ways in which (say) a yacht, a submarine, a penguin and a jellyfish move are all merely styles of moving through water. A yacht sails, a submarine pushes itself using propellers, a penguin wiggles itself in that strangely biological way we call "swimming", and a jellyfish... well, bloops, I suppose. In the same way, conscious human thought, unconscious human thought, the thought processes of a baby and those of a chatbot or a computer or a calculator are simply styles of the same mighty concept of information processing and storage.
On the lowest level, humans are machines. All of us - humans and machines - process information in different ways. That distinction is no longer problematic. It's a continuum, with intelligence in information-processing-and-storage merely forming variables in that continuum.
Therefore, the thorny philosophical question, "Can a machine think*?" now becomes a practical one: "How close can we get?" To what degree of accuracy can we build something which emulates the information processing and storage capabilities of, say, a "normal", conscious, English-speaking human?
Since we are machines, the answer is, clearly, "In theory, arbitrarily close," though the magnitude of the task of even getting a millionth of the way there is obvious.
But I think even making the first step along this road rules out possibilities. Suppose we did get there. Suppose we built an artificial human brain. Apart from the obviously gigantic leaps in neuroscience and technology that would have to have been made, what would we gain from that? Philosophically? Real human brains are very easy to come by.
The Turing Test can only test for humanity, and there are, surely, other possible forms of intelligent thought, which we could be aiming for instead. How would a "truly intelligent" machine differ from an intelligent human or a machine simply emulating an intelligent human? What about alien intelligences? What about the thing we're really hoping for, secretly: something which combines the flexible and creative and slippery thought processes of a human being with the unimaginably fast and accurate calculatory abilities of a computer? Would a superhuman intelligence register as human? Do we have the faintest conception of how a superhuman intelligence would behave? Or whether we would even be able to recognise one?
Considering this, it seems to me that the Turing Test only detects thought on a very specific combination of "wavelengths" in a phase space of uncountably many dimensions.
While better tests doubtless exist, I believe that figuring out a definitive, algorithmic process for recognising every possible type of intelligent thought is about as difficult - and for the same reasons - as figuring out a way to recognise every possible graphical representation of the letter "A".**
In fact, given the limitations of human thought itself, I suspect the problem may be even harder. As a last resort, humans may be consulted to determine, collectively, what is or is not an "A", but as we are, ourselves, intelligent beings, it may be simply impossible for us to devise and perform a similarly definitive test to divide the universe into intelligent beings and dumb terminals. I suspect that it may be the case that whatever you do, there will always be an "A" you didn't think of: a form of intelligent thought which no test, no human, even, recognises as such.
Which is heartening, because it means the Turing Test is at least useful, serving as a fair first approximation to this unattainable Magic Test. But it is also a little scary, because what if the Turing Test, or some highly advanced and refined yet still necessarily imperfect Turing-Plus Test becomes law someday? Who knows what perfectly intelligent organisms or programs we might end up discarding as failures?
* An interesting follow-up question to "Can a machine think?" is "Will a machine ever be legally recognised as a person?" A very satisfactory answer to this popped up in the Everything2 chatterbox a while ago. This is a legal, rather than a philosophical question. Think about corporations. If a corporation can be legally defined to be a person, why not a machine? If it's profitable, you can bet the law will be passed. Thus, machines are very likely to become people, MUCH sooner than they are ever likely to become intelligent.
** Heh, can you tell I've been reading Metamagical Themas lately?