The following is not a thesis. EDIT: also, I've been shown that it's wrong. So, uh, never mind, I guess?
I had a long and generally irritating conversation on IRC with a singularitarian who repeatedly referred to the concept of a "general AI", but failed to ever give a coherent definition of this term, or even a single straight answer to any of my direct questions on the subject.
The premise for the concept of the technological singularity is:
- that the rate at which technology advances is keyed to the intelligence of human beings - the smarter we become, the faster technology advances; and
- that a point will come at which technology has advanced far enough that it can be used to augment human intelligence.
Once this point is reached, the rate at which technology advances is keyed to the intelligence of human beings, but this in turn is keyed to the current state of technology. A feedback loop arises and technological advancement shoots off exponentially, resulting in indescribable "transhuman intelligences".
Or, to put it another way: let's say that it takes human beings 100 years to build a computer as intelligent as humanity is. After 100 years the amount of intelligence in the world has doubled. Because of this extra intelligence, it takes only 50 years to repeat the same feat. Now there is four times as much intelligence and it only takes 25 years. Then 12.5. Then 6.25. After 200 years, the amount of intelligence on Earth is infinite. Or, at the very least, maxed out.
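The arithmetic above is just a geometric series: the doubling intervals 100 + 50 + 25 + ... sum to 200 years, so infinitely many doublings fit inside a finite span. A throwaway sketch (the units and starting values are arbitrary, not part of anyone's actual model):

```python
# Toy model of the runaway-intelligence argument: each doubling of
# intelligence halves the time until the next doubling.
time_elapsed = 0.0
interval = 100.0    # years until the first doubling
intelligence = 1.0  # humanity's starting intelligence, arbitrary units

for _ in range(30):
    time_elapsed += interval
    intelligence *= 2
    interval /= 2

# time_elapsed creeps toward 200 years but never exceeds it,
# while intelligence grows without bound.
print(time_elapsed, intelligence)
```

After thirty doublings the elapsed time is within a fraction of a second of 200 years, while "intelligence" has grown by a factor of over a billion; that is the whole content of the prediction.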
That's the prediction, and one with which I disagree in the strongest possible terms. In brief, my objection is to the second point. I believe that it is impossible for human beings to create a human-equivalent artificial intelligence. Even ignoring the other glaring, gross oversimplifications in either model, I believe that it is impossible for a point to be reached where technology is advanced enough to improve its own intelligence.
Firstly, how do you define intelligence? Certainly it's not a substance like oil or snow, something that you can just churn out in factories and shunt around in trucks. It's not a simple, measurable quantity.
Right now - and I may be proven wrong imminently once this has been pushed out online - I believe that "intelligence" boils down to pure problem-solving capability.
Let us assume for the sake of argument that it is possible to conceive of, and possibly even define mathematically, a universal set of Problems, each problem having a Solution. Thus, we have a set of ordered Problem/Solution pairs. A Problem can be "find the real roots of x² - 1 = 0", or "walk across this room". It can be "draw this person" or "find the moral of Moby Dick" or "mate in three moves" or "put all your clothes on in the correct order" or "learn to ask for this meal in French" or "put a man on the Moon" or "reverse this sentence phonetically" or "find a grand unified field theory" or "write the Great American Novel". Solutions may be singular, plural, infinitely numerous or non-existent. Solutions may be objective or subjective. In the subjective case, we can simply split the Problem into N Problems - one for each possible observer of the Solution. "Write something that [Barack Obama, Sam Hughes, Lady Gaga] would consider to be the Great American Novel". I thus define "intelligence" as an innate capacity to devise the Solutions to some subset of all those possible Problems.
The great thing about this definition is that Turing's test is just one such subjective Problem: "Convince [a specific human being], through the medium of text, that you are also a human being". Passing it simply proves that you - whatever you are - can pass it. (I mean seriously, how stupid is the Turing Test? I'm a human being just because I can convince another human being that I am one? Do you know how easy it is to trick people into believing that false things are true? What if I convince them that I'm the messiah? Would that make me the messiah?)
Clearly intelligence is already an infinite-dimensional beast. Clearly, pocket calculators and computers are intelligent in their own small way, being capable of handling a few very rigidly defined classes of Problems. Chess computers can handle very advanced Problems Of A Chess-Related Nature. Ants can walk and search at random and bring food back to their nests. Squirrels have advanced pathfinding capability. Dogs can learn tricks. Dolphins and apes are properly advanced and can learn a very great deal. Humans working alone may be profoundly stupid-- either due to poor luck in the genetic lottery or a simple lack of education-- or incredibly intelligent. Human intelligences can be completely orthogonal to one another; two people can both be "incredibly intelligent" while having completely distinct fields of expertise. Multiple humans working in concert can solve incredibly wide ranges of Problems. A human working in concert with his or her computer? Even greater still. The internet is gradually linking people and information up to such an incredible extent that it becomes harder to see "who" exactly has solved a given Problem.
Let's take a closer look at the inner workings of a computer, since it is presumably going to become the basis for any putative artificial intelligence that the human race eventually calls into existence. Does the computer do anything that a sufficiently numerate network of slaves could be forced to do (if whipped hard enough for long enough)? The magic of a computer is in its human programmers' capability to break down every problem, no matter how complex, into a matter of extraordinarily simple binary arithmetic operations. Humans can do binary arithmetic. Humans can also break problems down into smaller problems. How else were the computers originally designed and programmed? Oh, so we used computer-aided design? Well, we programmed those computers too. At the very bottom layer, the whole thing is run by human beings.
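The reduction described above - every computation broken down into trivially simple binary operations, each one easy enough for a person to perform by hand - can be made concrete. Here is a minimal illustration (my own, not from the original argument): ordinary integer addition expressed entirely as single-bit AND, OR and XOR operations, exactly the kind of step a sufficiently patient human could execute longhand.

```python
# Integer addition reduced to single-bit operations, each one simple
# enough to perform by hand with a pencil.

def full_adder(a, b, carry_in):
    """Add three single bits; return (sum_bit, carry_out)."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add(x, y, width=8):
    """Ripple-carry addition of two unsigned integers, one bit at a time."""
    carry = 0
    result = 0
    for i in range(width):
        bit_x = (x >> i) & 1
        bit_y = (y >> i) & 1
        s, carry = full_adder(bit_x, bit_y, carry)
        result |= s << i
    return result

print(add(13, 29))  # same answer as 13 + 29
```

Every layer above this - multiplication, floating point, compilers, chess engines - is built by humans stacking these trivial steps, which is the point: the machine does nothing that its programmers could not, in principle, do themselves.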
Searle's Chinese Room-- that is, the combination of a human being and the stack of manually-worked machine code-- can indeed solve the Problem of "convince this human Chinese speaker that you, too, are a human Chinese speaker". The person operating the room, however, cannot solve this Problem alone. This is because the operator could not devise the computer program by himself. The critical component here, cunningly omitted from the problem, is the original author of the computer program. That guy can definitely speak Chinese. He isn't physically present in the room, but his code is, which means that he, and anything he could theoretically be capable of, is present too. Time constraints notwithstanding, Searle's Chinese Room can't do anything that its operator and the computer program's original author(s) couldn't do in concert.
I have studied mathematics; I have faced problems which, even when given all the necessary facts, I was unable to solve. On some of them I made headway, but making headway doesn't solve the problem. It simply breaks the Problem into two smaller Problems, one of which I have solved and one of which I have not. I don't think it's possible to be sure how difficult a Problem is until it's been solved, only to set lower bounds on its "difficulty". All insurmountable Problems are equal in their insurmountability.
I wrote an evil Tetris AI. For some days after I wrote it, it would surprise me momentarily with its choices of pieces. Then I would look a little more closely and think more carefully and realise that it had indeed made the worst choice. It looked for a moment as if the AI was smarter (or dumber) than me, but in fact it was doing exactly what I had told it to do and nothing which I couldn't have done myself longhand given the time.
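The core trick of an adversarial piece-dealer like that can be sketched in a few lines. This is a hedged reconstruction of the general idea, not the original program: the board model, `placements` and `score` functions here are hypothetical stand-ins.

```python
# Sketch of an "evil" piece selector: for each candidate piece, find the
# BEST outcome the player could achieve with it, then deal the piece whose
# best outcome is WORST. All inputs are hypothetical stand-ins.

def worst_piece(board, pieces, placements, score):
    """
    board:      current game state (opaque to this function)
    pieces:     iterable of candidate pieces
    placements: placements(board, piece) -> iterable of resulting boards
    score:      score(board) -> number, higher is better for the player
    """
    def best_outcome(piece):
        return max(score(b) for b in placements(board, piece))
    return min(pieces, key=best_outcome)

# Toy demonstration with numeric "boards": piece "S" is dealt, because
# even its best placement is worse than the other pieces' best placements.
outcomes = {"S": [1, 2], "L": [5, 0], "I": [9]}
print(worst_piece(None, outcomes, lambda b, p: outcomes[p], lambda b: b))
```

Note that nothing here is smarter than its author: the program just exhaustively evaluates choices the author specified, faster than the author could by hand.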
Can a chess computer program a smarter chess computer?
Can an author write about a character who is much smarter than the author is, who solves much more difficult problems than the author does? Well, yes. But, faced with a real Problem which he cannot solve, can an author solve the problem by writing about a fictional character who can solve the problem-- and then just steal his fictional creation's solution?
"Create an artificial intelligence which is smarter than you are" is a Problem.
"...is smarter than you are" is shorthand for "...can solve a superset of the Problems which you can solve by yourself" which is shorthand for "...can solve a Problem that you cannot".
If you can program a computer to solve a problem for you, then you can solve that problem yourself... given time. And if the AI can do something that you can't, then that's a contradiction in terms. Therefore, no human can create a superhuman AI.
What about a human-equivalent AI? Can a human create a machine capable of solving every problem that the human can solve? Or even most of those problems?
How could such a machine be created? Well, you could simply pose every imaginable problem to the human in turn, write down all of his answers and hard-code those into the machine. That's impossible, because there are uncountably many problems. You could take a precise reading of the structure of the human's brain and simulate that brain inside a computer. But taking this initial reading is impossible in practice right now, and may remain so indefinitely, and computers need to be, conservatively, ten orders of magnitude more powerful before the simulation step becomes possible. The final approach is to determine the common features and underlying processes which conspire to form the human's responses to questions: finding the critical points in the infinite-dimensional space of Solutions and interpolating between them. We would have to find a pattern. We would need our human - or the team of humans working with him or her - to understand the inner workings of the human brain better than the original human does.
It may be possible to ultimately decode an ant brain or even a sheep brain. But I believe that human brains are far too advanced, and far too evenly matched, for one to understand another, and that it is a logical paradox for a human brain to fully comprehend itself. You are your brain. To completely understand yourself, you would have to become a living quine. Even if that were possible, you could never accurately predict your own responses to a situation or Problem, because that would mean you could think faster than yourself.
Is any of this making sense?
Difficult as it is to conceive of, there is a vast class of problems which creatures with 250 IQs can solve-- if not easily, then at least with a little thought-- but which no human can approach. These problems are distinct from the genuinely unsolvable Problems, but there's no way for us to tell which Problem falls into which category because we can never be smart enough. It is possible to conceive of a superhuman species to whom all human behaviour is transparently obvious and predictable.
No being can fully know itself, yet full self-knowledge is a necessary prerequisite for building a working copy of oneself. Sure, maybe you can clone a human. That's genetics. Can you clone her childhood and everything else that led to the final person whom you cloned?
Lesser intelligences cannot create greater intelligences... dumb luck notwithstanding.