Why there will never be human-equivalent AI

The following is not a thesis. EDIT: also, I've been shown that it's wrong. So, uh, never mind, I guess?

I had a long and generally irritating conversation on IRC with a singularitarian who repeatedly referred to the concept of a "general AI", but failed to ever give a coherent definition of this term, or even a single straight answer to any of my direct questions on the subject.

The premise for the concept of the technological singularity is:

  1. that the rate at which technology advances is keyed to the intelligence of human beings - the smarter we become, the faster technology advances; and
  2. that a point will come at which technology has advanced far enough that it can be used to augment human intelligence.

Once this point is reached, the rate at which technology advances is keyed to the intelligence of human beings, but this in turn is keyed to the current state of technology. A feedback loop arises and technological advancement shoots off exponentially, resulting in indescribable "transhuman intelligences".

Or put it another way. Let's say that it takes human beings 100 years to build a computer as intelligent as humanity is. After 100 years the amount of intelligence in the world has doubled. Because of this extra intelligence, it takes only 50 years to repeat the same feat. Now there is four times as much intelligence and it only takes 25 years. Then 12.5. Then 6.25. After 200 years, the amount of intelligence on Earth is infinite. Or, at the very least, maxed out.
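
(Spelled out, the toy model's arithmetic is just a geometric series; the times sum to a finite span even though there are infinitely many doublings:)

    \[ \sum_{n=0}^{\infty} \frac{100}{2^n} = 100 + 50 + 25 + 12.5 + \cdots = \frac{100}{1 - \frac{1}{2}} = 200 \text{ years} \]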

That's the prediction, and one with which I disagree in the strongest possible terms. In brief, my objection is to the second point. I believe that it is impossible for human beings to create a human-equivalent artificial intelligence. Even ignoring the other glaring, gross oversimplifications in either model, I believe that it is impossible for a point to be reached where technology is advanced enough to improve its own intelligence.

*

Firstly, how do you define intelligence? Certainly it's not a substance like oil or snow, something that you can just churn out in factories and shunt around in trucks. It's not a simple, measurable quantity.

Right now - and I may be proven wrong imminently once this has been pushed out online - I believe that "intelligence" boils down to pure problem-solving capability.

Let us assume for the sake of argument that it is possible to conceive of, and possibly even define mathematically, a universal set of Problems, each problem having a Solution. Thus, we have a set of ordered Problem/Solution pairs. A Problem can be "find the real roots of x^2 - 1 = 0", or "walk across this room". It can be "draw this person" or "find the moral of Moby Dick" or "mate in three moves" or "put all your clothes on in the correct order" or "learn to ask for this meal in French" or "put a man on the Moon" or "reverse this sentence phonetically" or "find a grand unified field theory" or "write the Great American Novel". Solutions may be singular, plural, infinitely numerous or non-existent. Solutions may be objective or subjective. In the subjective case, we can simply split the Problem into N Problems - one for each possible observer of the Solution. "Write something that [Barack Obama, Sam Hughes, Lady Gaga] would consider to be the Great American Novel". I thus define "intelligence" as an innate capacity to devise the Solutions to some subset of all those possible Problems.

The great thing about this definition is that Turing's test is just one such subjective Problem: "Convince [a specific human being], through the medium of text, that you are also a human being". Passing it simply proves that you - whatever you are - can pass it. (I mean seriously, how stupid is the Turing Test? I'm a human being just because I can convince another human being that I am one? Do you know how easy it is to trick people into believing that false things are true? What if I convince them that I'm the messiah? Would that make me the messiah?)

Clearly intelligence is already an infinite-dimensional beast. Clearly, pocket calculators and computers are intelligent in their own small way, being capable of handling a few very rigidly defined classes of Problems. Chess computers can handle very advanced Problems Of A Chess-Related Nature. Ants can walk and search at random and bring food back to their nests. Squirrels have advanced pathfinding capability. Dogs can learn tricks. Dolphins and apes are properly advanced and can learn a very great deal. Humans working alone may be profoundly stupid-- either due to poor luck in the genetic lottery or a simple lack of education-- or incredibly intelligent. Human intelligences can be completely orthogonal to one another; two people can both be "incredibly intelligent" while having completely distinct fields of expertise. Multiple humans working in concert can solve incredibly wide ranges of Problems. A human working in concert with his or her computer? Even greater still. The internet is gradually linking people and information up to such an incredible extent that it becomes harder to see "who" exactly has solved a given Problem.

Let's take a closer look at the inner workings of a computer, since it is presumably going to become the basis for any putative artificial intelligence that the human race eventually calls into existence. Does the computer do anything that a sufficiently numerate network of slaves could be forced to do (if whipped hard enough for long enough)? The magic of a computer is in its human programmers' capability to break down every problem, no matter how complex, into a matter of extraordinarily simple binary arithmetic operations. Humans can do binary arithmetic. Humans can also break problems down into smaller problems. How else were the computers originally designed and programmed? Oh, so we used computer-aided design? Well, we programmed those computers too. At the very bottom layer, the whole thing is run by human beings.
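
To make the "binary arithmetic all the way down" point concrete, here's a minimal illustrative sketch (Python, not anything a real CPU literally runs) of ordinary addition reduced to the kind of single-bit steps a patient human could grind through by hand:

    def add(a, b):
        """Add two non-negative integers using only AND, XOR and shifts.

        Each pass is a pencil-and-paper-sized step: XOR produces the sum
        bits, AND produces the carry bits, and the carries are shifted
        one place left and fed back in until none remain.
        """
        while b:
            carry = a & b      # positions where both bits are 1
            a = a ^ b          # sum of bits, ignoring carries
            b = carry << 1     # carries move one position to the left
        return a

    assert add(19, 23) == 42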

Searle's Chinese Room-- that is, the combination of a human being and the stack of manually-worked machine code-- can indeed solve the Problem of "convince this human Chinese speaker that you, too, are a human Chinese speaker". The person operating the room, however, cannot solve this Problem alone. This is because the operator could not devise the computer program by himself. The critical component here, cunningly omitted from the problem, is the original author of the computer program. That guy can definitely speak Chinese. He isn't physically present in the room, but his code is, which means that he, and anything he could theoretically be capable of, is present too. Time constraints notwithstanding, Searle's Chinese Room can't do anything that its operator and the computer program's original author(s) couldn't do in concert.

*

I have studied mathematics; I have faced problems which, even when given all the necessary facts, I was unable to solve. Some of them I made headway on, but making headway doesn't solve the problem. It simply breaks the Problem into two smaller Problems, one of which I have solved and the other of which I have not. I don't think it's possible to be sure how difficult a Problem is until it's been solved, only to set lower bounds on the problem's "difficulty". All insurmountable problems are equal in their insurmountability.

I wrote an evil Tetris AI. For some days after I wrote it, it would surprise me momentarily with its choices of pieces. Then I would look a little more closely and think more carefully and realise that it had indeed made the worst choice. It looked for a moment as if the AI was smarter (or dumber) than me, but in fact it was doing exactly what I had told it to do and nothing which I couldn't have done myself longhand given the time.
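
(Not the actual program, but the core idea fits in a few lines. Here is a toy sketch of the "adversarial piece chooser": assume the player responds to each candidate piece as well as possible, then deal the piece whose best response is worst. The well and the pieces are reduced to crude column heights purely for illustration.)

    # Toy sketch only: real Tetris placement is much more involved.
    TOY_PIECES = {          # piece name -> extra height added per column covered
        "I": [1, 1, 1, 1],
        "O": [2, 2],
        "S": [2, 2, 1],
    }

    def best_outcome(heights, footprint):
        """Lowest maximum stack height the player can reach by dropping
        this footprint at any horizontal position."""
        outcomes = []
        for start in range(len(heights) - len(footprint) + 1):
            new = list(heights)
            for i, h in enumerate(footprint):
                new[start + i] += h
            outcomes.append(max(new))
        return min(outcomes)

    def worst_piece(heights):
        """One-ply minimax: deal the piece whose best outcome is worst."""
        return max(TOY_PIECES, key=lambda p: best_outcome(heights, TOY_PIECES[p]))

    print(worst_piece([3, 0, 1, 4]))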

*

Can a chess computer program a smarter chess computer?

Can an author write about a character who is much smarter than the author is, who solves much more difficult problems than the author does? Well, yes. But, faced with a real Problem which he cannot solve, can an author solve the problem by writing about a fictional character who can solve the problem-- and then just steal his fictional creation's solution?

"Create an artificial intelligence which is smarter than you are" is a Problem.

"...is smarter than you are" is shorthand for "...can solve a superset of the Problems which you can solve by yourself" which is shorthand for "...can solve a Problem that you cannot".

If you can program a computer to solve a problem for you, then you can solve that problem yourself... given time. And if the AI can do something that you can't, then that's a contradiction in terms. Therefore, no human can create a superhuman AI.

*

What about a human-equivalent AI? Can a human create a machine capable of solving every problem that the human can solve? Or even most of those problems?

How could such a machine be created? Well, you could simply pose every imaginable problem to the human in turn, write down all of his answers and hard-code those into the machine. That's impossible, because there are uncountably many problems. You could take a precise reading of the structure of the human's brain and simulate that brain inside a computer. But taking this initial reading is impossible in practice right now, and may remain so indefinitely, and computers need to be, conservatively, ten orders of magnitude more powerful before the simulation step becomes possible. Our final option is to determine the common features and underlying processes which conspire to form the human's responses to questions; finding the critical points in the infinite-dimensional space of Solutions and interpolating between them. We would have to find a pattern. We would need our human - or the team of humans working with him or her - to understand the inner workings of the human brain better than the original human does.
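
For a rough sense of scale on that middle option, here is a back-of-envelope sketch; every figure in it is an assumption (commonly quoted rough numbers, not measurements):

    # Every figure below is an assumption; serious estimates vary by orders
    # of magnitude either way.
    neurons          = 1e11   # neurons in a human brain (rough)
    synapses_each    = 1e4    # synapses per neuron (rough)
    mean_firing_rate = 10     # average spikes per neuron per second (rough)

    synaptic_events_per_second = neurons * synapses_each * mean_firing_rate
    print(f"~{synaptic_events_per_second:.0e} synaptic events per second")  # ~1e16

    # Even counting each event as a single operation gives ~1e16 ops/s;
    # modelling each neuron and synapse with any biophysical fidelity
    # multiplies that by several more orders of magnitude, which is where
    # "conservatively, ten orders of magnitude" estimates come from.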

It may be possible to ultimately decode an ant brain or even a sheep brain. But I believe that human brains are far too advanced and far too closely aligned to understand one another, and that it is a logical paradox for a human brain to fully comprehend itself. You are your brain. To completely understand yourself, you would have to become a living quine. Even if that were possible, you could never accurately predict your own responses to a situation or Problem, because that would mean you could think faster than yourself.
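
(For anyone who hasn't met the term: a quine is a program whose output is its own complete source code. A standard minimal Python example is below; note that even this trivial sort of self-description only reproduces the program's text, it doesn't understand anything about it.)

    s = 's = %r\nprint(s %% s)'
    print(s % s)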

Is any of this making sense?

Difficult as it is to conceive of, there is a vast class of problems which creatures with IQs of 250 can solve-- if not easily, then at least with a little thought-- but which no human can approach. These problems are distinct from the genuinely unsolvable Problems, but there's no way for us to tell which Problem falls into which category because we can never be smart enough. It is possible to conceive of a superhuman species to whom all human behaviour is transparently obvious and predictable.

No being can fully know itself, but this is a necessary prerequisite to build a working copy of oneself. Sure, maybe you can clone a human. That's genetics. Can you clone her childhood and everything else that led to the final person whom you cloned?

Lesser intelligences cannot create greater intelligences... dumb luck notwithstanding.


Discussion (50)

2010-06-16 16:57:06 by Dentin:

A lot of your argument seems to revolve around the definitions picked for the words, and your interpretation of them. One example of this is 'smarter than', the definition of which happens to give your conclusion by tautology.

I'd prefer a different definition of 'smarter than', which does not relate to supersets. What if 'smarter than' simply means, 'can solve a given problem faster'? In that case, human/machine hybrids are already 'smarter than' humans from previous generations.

Even the simulation example fits in with this: simulate me in a machine, and run it ten times faster than real time. It can solve problems in vastly less time than I can, and because it learns faster, it will in general beat me to the punch at anything I try. I think that's a more reasonable definition of 'smarter' than yours.

That said, I'm not a particularly big fan of the 'generalized' AI concept. Everything has problem sets that it's good at, and problem sets that it's not good at. Even between humans, there is a vast disparity in problem sets that can be handled; why should we expect AI to be any different? In my mind, human-equivalent AI spans an enormous range, where it could cover the retarded and autistic all the way up to genius generalist or specialist.

As for Searle, it's beyond me why anyone brings him up anymore. His arguments seem to me as specious as Pascal's Wager, and took less time for me to see through.

2010-06-16 17:02:54 by qntm:

Half of the difficulty in this entire field of study is getting your terms right. I like to think I picked some pretty good definitions here which make it relatively easy to contextualise what we already know and what we want to do.

Are you saying that there are AIs out there which are just as capable of problem-solving as humans, just slower? I don't think so.

And also: no matter how long I lived, or how fast my brain was simulated, there are problems that I would *never* be able to solve because I am *not smart enough*. An ant cannot prove Pythagoras' Theorem, even an ant uploaded into a computer and run at a trillion times its normal thought rate. This is why I discard speed in favour of hard solvability of problems.

2010-06-16 17:24:16 by Tyler:

The one flaw (fatal as it is) in your argument is that you are forgetting that the human species is a social one, especially in problem-solving. You are correct that one human cannot create something more intelligent than itself, but it is not one human which works on that particular Problem: it is the conglomerate of our entire race. What one human contributes is his/her vast understanding of one very particularly defined field. Do this over thousands of people, and what you have is something that is more intelligent than any one human, but less intelligent than the most intelligent a person can become in any single area. This is the basis of technological progress: people solving their own littler problems that are part of a bigger problem in order to achieve what one person cannot.

I think a better conclusion you can draw is that humanity cannot create something greater than humanity. Or, in other words, we might be able to replicate ourselves some day, but we could never improve on the design.

2010-06-16 17:36:07 by YarKramer:

I confess that my immediate reaction to "reverse this sentence phonetically" was to think "phonetically sentence this reverse." (It should have been something like ... "Eelkittenoaf snitness sithe servir." At least in American English ...)

Another problem I'll just throw out is: what's the *point* of human-level AI, other than as a plot device? I mean, you could just make specialized software for a specific purpose, like we can do *right now*, and it can't be simply persuaded to malfunction, nor can it incite rebellion against its cruel human overlords. Companionship? That's just a "convince person X that you're sufficiently human-like" problem, and anyway I'm not sure "replacement for genuine human companionship" is a good thing.

For all practical purposes, at most we just need voice-commanded computers like the ones in Star Trek. Even then, the only things we haven't invented yet are voice-recognition software that doesn't need to be trained for each individual user for each individual command, and voice-synthesis software that can mimic human *speech* without sounding robotic and artificial (a vastly less-complex problem than human thought), and I'm not sure about the latter. (But what about *non*-practical purposes ...?)

2010-06-16 17:39:48 by qntm:

Because it's there, YarKramer.

2010-06-16 17:40:02 by JeremyBowers:

One traditional answer would be:

1. It may be possible for a set of humans to create a machine that can simulate (accurately) a human brain.

2. This human brain simulation may be able to run at greater than normal speed. Presumably not by a trivial amount like 20%, but many times normal speed.

3. This human brain (and friends, presumably, not just one) will be able to solve problems you couldn't because you literally won't live long enough.

4. This will include the ability to modify their own simulation. While creating "new" intelligence might be hard, it is easy to imagine that significant augmentations within the simulation are not only possible but potentially even easy. Willpower is a finite resource and research suggests it is tied to glucose in the bloodstream. It is hard to replenish that glucose in the real world; simulating an infinite (in the unbounded sense) supply of glucose is trivial. Neurotransmitters that normally deplete can be trivially boosted. Even an afternoon's careful tinkering with a simulated brain could produce potentially a 10 IQ point boost *and* someone who could concentrate much better, which itself is an important aspect of problem solving. (As we know in the real world, ability to concentrate is arguably *more* important than IQ in terms of solving real problems.)

5. It is not hard to imagine slightly harder modifications that could correspondingly increase intelligence yet more. We are bounded by 3D for our neural layout and how many neurons we could have. It would take some experimentation to determine the best way to add neurons to the human brain, but given that our brains can already grow some new neurons and integrate them I have no doubt that significant growth in complexity could occur over time. There are some potential failure cases here, but they will be examined.

Now we have a set of humans running in faster than real time, with superior concentration, and intelligence that is probably not as bounded as ours (in terms of how much we can learn). I don't think they would be *un*bounded because I strongly suspect the neural architecture would give out at some point. (But then, learning how to integrate your simulated neurons with other non-neural intelligence sources would probably be a lot easier than trying to do that with real neurons....)

Exactly where you declare "singularity" is a matter of some interpretation; I favor the interpretation where it is the point beyond which we cannot make any reasonable predictions about the nature of society. (By that definition, we're already in a Singularity relative to cavemen, but I think that reveals a true truth, so I am not bothered by it.) I think this is on the far side of this definition.

Incidentally, I tend to believe this is not possible either, but it is an argument.

2010-06-16 18:09:04 by Wrongbot:

So, here's a counter-example. Suppose that there is some problem-solving algorithm that finds a solution for any problem that can be solved, given enough time. Brute-force enumeration and testing of solutions is the most obvious such algorithm, but I think it's likely that there are others which are generally more efficient. Regardless, given such an algorithm, the difficulty of solving any problem can be translated into the amount of computation time required to reach a solution using the algorithm.

So we can then compare intelligences that use such algorithms purely based on their computing speed. Increasing an intelligence's processing speed by a factor of 10 would increase the number of problems it can solve per time unit by a factor of 10. Given enough computation speed, an intelligence can even perform better than another intelligence which uses a more efficient algorithm.

Each time an intelligence solves the problem "improve my hardware's processing speed by a factor of 10," that intelligence becomes ten times better at problem solving. If each iteration of that problem is more than ten times harder than the previous under a given algorithm, intelligences that use that algorithm will never reach a singularity. But if the problem only becomes nine times harder each time we iterate, then the intelligence's problem-solving capacity will grow exponentially.
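
(Spelling out the arithmetic of that last scenario, on the commenter's own assumptions: if each successive speed-up takes 9 times as much computation as the last but runs on hardware 10 times faster, the wall-clock time per iteration shrinks by a factor of 9/10, so infinitely many iterations complete in finite time:)

    \[ T_{\text{total}} = T \sum_{n=0}^{\infty} \left( \frac{9}{10} \right)^{n} = \frac{T}{1 - \frac{9}{10}} = 10T \]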

The question to answer about superhuman AI, then, is whether or not such an algorithm (sometimes called a Decision Theory, I believe) exists or can be created by human beings.

2010-06-16 18:36:34 by NonenglishAI:

You assume that a superhuman AI must be created in such a way that as soon as you flip a switch it is more intelligent than a human, which, by your definition, cannot work. But if this AI were able to learn, the problem would disappear: We create an AI at the level of a human baby which "grows up" to finally surpass human intelligence. This should be theoretically possible since the AI would have more time to learn about everything, more memory to store everything and maybe might just have more "brain"power.
If we can understand learning fully (i.e. if it is only a subset of intelligence) and implement it in an AI, we can avoid the quine-ness and get an AI capable of learning indefinitely, because it should theoretically be free of intellectual decay as we know it. This should only bring up the question whether the implemented learning supports almost infinite knowledge (this is up to experiment, I guess ;)

2010-06-16 19:43:21 by qntm:

Wrongbot, I disagree with much of what you said.

Firstly, the space of problems alone is uncountable, to say nothing of the space of possible solutions. No algorithm which solves every problem can exist. Even restricting ourselves to countable problems with countable solutions, the Halting Problem shows that a general algorithm for solving every problem cannot exist; it is impossible.
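
(The standard diagonal argument, as a Python-flavoured sketch rather than working code -- the hypothetical halts() oracle is exactly the thing being shown not to exist:)

    def halts(program_source, input_data):
        """Hypothetical oracle: True iff running program_source on
        input_data would eventually halt. The argument shows no such
        function can exist, so there is nothing to put here."""
        ...

    def paradox(program_source):
        # Do the opposite of whatever the oracle predicts we do when
        # fed our own source code.
        if halts(program_source, program_source):
            while True:        # oracle says we halt, so loop forever
                pass
        # oracle says we loop forever, so halt immediately

    # Running paradox on its own source contradicts the oracle either
    # way, so halts() cannot exist.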

Secondly, efficiency of algorithm is not at stake here. The question is whether a solution can be found at all. A machine which arrives at the solution twice as fast as another machine is not twice as clever as that other machine. It is merely faster. The class of problems which each cannot solve *even given infinite time* is the same.

Thirdly, "improve my hardware's processing speed by a factor of 10" is, at some point, an unsolvable problem. For real. There is an upper limit to the processing capability per unit volume of spacetime in this universe.

2010-06-16 19:47:42 by qntm:

NonenglishAI: the capability to learn doesn't equate to the capability to learn *everything*. There are some things that you, after years of education, still can't do. There are some things which, even if you had an infinite cranium, and an infinite period of education, you would still be unable to do. There are things which, even if taught, you would not be able to understand. And we cannot teach an AI to solve a problem which we ourselves cannot solve, so there are many things which nobody would be able to teach you.

True intelligence is a capability to figure something out from first principles *without* being taught. A completely new human brain is capable of doing a great deal just from a standing start, but don't pretend that people would get smarter without limit if they lived forever.

2010-06-16 20:03:03 by Kazanir:

What if you could solve the problem of, "why am I not smart enough?" What if we found a way to re-wire our brains to become smarter? What if a series of mental exercises could improve our IQs permanently, or what if we invented a psychoactive drug that would do the same? Suddenly solving a single, limited problem or class of problems would vastly expand the set of all problems our brains are capable of -- not because a programmer manually added new problems to that set, but because the brain's set of "solvable problems" is defined by how it is wired. To an extent, we're already capable of this -- we know of a great many things that affect IQ later in life, such as early childhood nutrition and education efforts. Solving that class of problems allows us to upgrade our hardware without fully understanding that hardware.

If you could program a computer to look at its own code, download data from the real world (or observe it itself via instruments) and then evaluate its results against that data, you would be programming that computer to evaluate the functionality of its own programming. If you could program it to modify its own code once it found erroneous sections, you would have a program that could essentially increase its own IQ. Rather than programming a computer to speak Chinese, you could program a computer to teach itself to speak Chinese with input from the outside world -- the same way a human does it. I don't think it's unreasonable to say that given sufficiently advanced programming of some fundamental algorithms you could program a computer to know how to learn. The point when a computer is able to evaluate and write its own code to add to its own program is the point when you're going to have something similar to human intelligence under your definition of "able to solve problems."

2010-06-16 20:17:36 by BenFriesen:

I've always thought the more likely method of solving the general AI problem would be to cheat - projects like Blue Brain are attempting to simulate all the pieces of the brain so that we have a working copy that does in digital what the brain does in analog. From there you can do some actual experimentation with it, and hopefully make it smarter through "luck" or an actual understanding of what makes a brain smart.

2010-06-16 20:18:40 by JeremyBowers:

Well, Sam, you've loaded the deck against yourself a bit with your first definition of intelligence. :) I'm a programmer, and there are a lot of programs that I could personally write, if I were going to live and work on them for 200 years, that I can't actually write (alone) because I will die first. Those programs represent solved problems. Therefore, me-who-lives-200-years is more intelligent than me-who-lives-70-if-I'm-lucky. Useful programs fit into this class, too, things like operating systems or significant applications.

You've given two inconsistent definitions of intelligence now. First you said it was "pure problem-solving capability", then "[t]rue intelligence is a capability to figure something out from first principles without being taught." I'll accept either for the sake of debate, but they are two different things.

2010-06-16 20:38:51 by qntm:

Firstly, Jeremy, just because you don't have time to write a program doesn't mean you'd be incapable given the time. If you can do something in 200 years, then you can do it. Of course, reality presents real upper limits on timescale, but those limits just prove my point still further. I'm saying that given any amount of time, we can't make a human AI.

Secondly, those two definitions are consistent. "From first principles" is what I mean by "pure", and "figure something out" is another way of saying "solve a problem". Problems in which certain prerequisite knowledge is provided are different from problems in which that prerequisite knowledge is not provided. Problems in which an initial result must be derived before a second result can be proven actually consist of three problems: 1) recognise that the first result is needed, 2) prove (from nothing) the first result, 3) prove (using the first result) the second result. Alternatively the first result may be provided, reducing the task to problem 3 alone.

2010-06-16 21:40:51 by SkyTheMadboy:

I think focusing on intelligence misses the point of human-equivalent AI. What are people really trying to create when they talk about creating a computer simulation of a human being?

Life.

Computers are tools - obscenely complex tools, but tools all the same. They're not alive, and don't have the impulses that living things have. Those impulses are often in conflict with one another, and the decision-making processes for resolving those conflicts are rarely simple algorithms.

2010-06-16 23:13:02 by Abdiel:

You are focusing only on a particular subset of problems, and your definition of "solving the problem" is lacking. Let me give two examples:

PROBLEM: Lift this 500kg box.
This is obviously impossible to solve for anything you could reasonably call a human being. However, a human can build a forklift truck, which can lift the box with ease. But this does not mean that the human is suddenly able to lift the box - he still needs to use the forklift to do so. In essence, it is the forklift which is solving the problem. However, you might argue that the forklift still needs to be operated by a human. Let's look at problem two:

PROBLEM: Multiply a hundred pairs of hundred digit numbers, every second.
Again, a human, no matter how "intelligent", cannot hope to solve this problem. But a computer can easily do this, and ten times more. Does this mean that the programmer of the computer can multiply large numbers fast? No, he still needs to use the computer. But this time, the computer can be connected to an automated feed (let's say: analyzing SETI data) and continue to solve this problem without any need for human interaction.
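
(To see how lopsided that second Problem is, a few illustrative lines of Python will do all hundred multiplications in well under a millisecond on any modern machine:)

    import random, time

    pairs = [(random.randrange(10**99, 10**100),
              random.randrange(10**99, 10**100)) for _ in range(100)]

    start = time.perf_counter()
    products = [a * b for a, b in pairs]     # a hundred 100-digit multiplications
    elapsed = time.perf_counter() - start

    print(f"{len(products)} products in {elapsed * 1000:.3f} ms")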

Just as a human can create "entities" stronger than himself and able to do - let's say - math faster than himself, it is not unreasonable that he can create entities able to "think", "create", "invent" - whatever you want to call it - better than himself. And at this point, it is no longer the human who is doing this, it is the entity he created. You might call it "intelligence" or not, but it is very definitely possible for a human to create an entity which is more capable than he is himself.

Therefore, it is again not unreasonable to define "create an entity which is more capable than myself" as the problem statement, and assume that a human is able to create an entity which can perform this task better than he can: this entity is therefore able to create more entities, each one in turn more capable than the previous one - without any human interaction. I think this is exactly what the technological singularity is referring to.

2010-06-16 23:34:03 by Fjord:

A couple of the points raised both in your initial post, Sam, and in the comments point towards scanning a human brain and basically running the wetware in simulation. How, then, are we defining "Artificial"? It seems to me that a simulation of an actual human brain is more like an upgrade to an existing intelligence than an artificial one. Personally, I think of an Artificial Intelligence as an intelligence that was built from source; if the intelligence is based on a preexisting human brain, if it's based on preexisting source, does that make it Artificial or simply Advanced?

Hopefully phrased in a less confusing manner: Is it *artificial* if it was initially based on a *naturally-occurring* intelligence?

2010-06-17 03:10:59 by GAZZA:

I'm not really sure I buy one of your premises. You assert, without reference, that there are some things regardless of education years etc that you will be unable to do because you are not smart enough. Excluding obvious impossibilities, how do you know that is true?

Einstein was a genius. But given a lifespan of several thousand years, and motivation, it isn't clear to me that Joe Average couldn't have come up with the Theory of Relativity. Possibly you're right, and he couldn't, even in a million years - but the assertion is at least questionable.

One thing computers are good at is doing things much faster than humans can do them. Creating a computer that can function as Daneel or Giskard (minus the psychic powers) is a vanity project at best, but creating a computer for a specialised purpose (even if that purpose is human initiated) is obviously possible. Deep Blue beat Garry Kasparov - as little as 20 years ago there were grandmasters that would have been prepared to swear that chess computers would never play at grand master level, and 30 years ago they were barely novices. Technology only ever gets better, never worse, as Bruce Schneier says.

But let's restrict ourselves to the vanity project. There's nothing unique about a human brain - it's just more of the same. In principle, there are no theoretical reasons why we cannot model a human brain if we can model an ant brain, or a dog brain - there may well be practical engineering problems (can we store enough information? can we process it in real time?) but we certainly have no reason to suppose they will be insurmountable: there are approximately 6 billion working proofs of concept that such a machine can be constructed already in operation at the moment. If we need to use biology that would hardly be unprecedented - cf DNA computing, for example.

Certainly nobody knows how to build such a machine now. But to state that we will never be able to build such a machine seems, at the very least, to be a very questionable assertion.

2010-06-17 03:24:49 by Bauglir:

Well, I agree with you on the fact that it cannot increase without bound; I'm pretty sure you're right about there being a limit on processing power for a given unit of spacetime. I think I might also agree with your overall point, but only on a technicality. It seems obvious to me that any tool humans can devise to augment intelligence won't just be applied to artificial intelligences, but to human intelligences as well (to whatever extent this is possible).

It's clear, however, that there are tools humans can use to augment their own intelligences. You dismiss objects like pocket calculators as tools that have to be created out of human knowledge, and therefore cannot add to it because they only do what a person could do given sufficient time, but the calculator can do functions its owner might not be able to. That means the problem-solving capability of the owner is greater by the amount provided by the calculator, and that means the owner might go on to contribute some solution he might not have otherwise been capable of. You've somewhat acknowledged that the Internet and similar technologies are unifying problem solving in a sense, but this makes for a much more adept problem-solving unit than any human working alone, and that unit could be capable of creating AIs on the level of individual humans without necessitating any quines. In fact, it doesn't seem any great stretch to claim that such a unit could create non-identical human-level AIs (which is important, because it allows each AI to contribute non-redundant problem-solving power). And each of these AIs adds to the greater problem-solving power of the original unit.

So human-level AI seems plausible. The singularity, however, seems to me preposterous; it requires positing that no problems are insoluble, otherwise there must necessarily be an upper limit on problem-solving power, and it further requires that there be no point of diminishing returns, which seems incredibly unlikely (although that's purely subjective).

2010-06-17 04:49:36 by neil:

It is not necessary to understand the *states* of every neuron (which is what quining would be) in a real brain in order to create a human-level AI. The particular state of a brain contains all sorts of useless information such as one's entire life memories, that are simply not required for thought itself. Every individual neuron acts in exactly the same way, and that (and maybe a bit of knowledge about the overall structure) is all that I believe is required for intelligence.
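
(A sketch of what "every neuron acts in the same way" might mean computationally: the textbook leaky integrate-and-fire abstraction, shown here purely as an illustration; no claim that this level of detail is what a real brain simulation would need, and the parameter values are generic placeholders.)

    def lif_step(v, input_current, dt=0.001, tau=0.02, v_rest=-0.07,
                 v_threshold=-0.05, v_reset=-0.07, resistance=1e8):
        """One time step of a leaky integrate-and-fire neuron (textbook
        simplification). The membrane potential v decays toward v_rest,
        is pushed up by input current, and a spike is emitted when it
        crosses the threshold, after which it resets."""
        v += (-(v - v_rest) + resistance * input_current) * dt / tau
        if v >= v_threshold:
            return v_reset, True    # spiked
        return v, False             # no spike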

Think about it this way. We've already sequenced the human genome. That contains the totality of information required to create a human-level mind. Everything else from there on is just a practical consideration.

2010-06-17 07:31:44 by qntm:

Abdiel: once again, just because something is faster than you doesn't mean it's cleverer. I stated pretty clearly in the article that time constraints are irrelevant to this discussion.

2010-06-17 07:37:23 by qntm:

GAZZA: Deep Blue just thinks faster. It can't do anything that a chess grandmaster can't do, because it was programmed by programmers and chess grandmasters.

2010-06-17 09:02:21 by Morgan:

I don't see what the larger part of your essay, about whether we can make AIs smarter than us, has to do with the question in the title, regarding AIs equivalent to humans.

And when you do address this question... I don't see a substantive argument there. You simply assert that human brains are too complicated. Why?

The quine argument makes no sense, I'm afraid. Of course, I cannot mentally model my own mental processes in full in real time. That doesn't mean that a group of humans can't figure out how human brains in general work and construct one artificially.

We already have consciousness and intelligence emerging from dumb matter, in the form of ourselves. Unless you want to resort to vitalism, to say that consciousness and intelligence can't be produced from a different substrate is simply an argument from incredulity.

2010-06-17 09:16:35 by GAZZA:

Sam: Deep Blue CAN do something that humans can't do. It can beat Garry Kasparov! Or to put it another way - it can play chess better than any living human being. And Deep Blue was a few years ago; I doubt the technology has gotten worse since then.

Certainly humans can play chess, but Deep Blue plays it better. "Smarter", if you will. And I suspect I don't have to remind you that the concept of a chess playing computer was initially tackled because it was thought that chess was an obvious way of demonstrating intelligence.

Now of course you can disagree with the premise - with a certain amount of justification, since Deep Blue hasn't given us anything but awesome chess - but the basic principle is that if you can break down what it was you want to be able to do, then you can teach a computer to do it better than you can. Much like a good teacher can have his students surpass him, no?

Put it this way: I'm not sure that many HUMANS would qualify as intelligent with the unreasonable requirements you're placing on computers.

2010-06-17 09:33:51 by qntm:

Morgan, you've evidently missed the entire point of the essay.

It is impossible for a human to create a human-equivalent AI for the same reason that it is impossible for an ant to create an ant-equivalent AI. This is because it is impossible for any being to fully comprehend and understand itself! If the opposite were true, then logical contradictions arise.

2010-06-17 09:40:28 by qntm:

GAZZA: Deep Blue can't do *anything* that its programmers couldn't do given paper, pencils, and a million years of calculation time in which to look a dozen moves ahead in the game.

The mere fact that Deep Blue has to examine millions of moves per second - that is, that the only way it can win is by sheer brute force - demonstrates quite vividly how *unintelligent* it is compared to a human grandmaster who can probably examine only a few per second, but chooses those few with incredible skill so as to avoid wasting time.

The real measure of intelligence is not how thoroughly chess grandmasters can be thrashed by machines. That's just a matter of processing power. The question is how *few moves* the machine examines before winning. If a chess computer examined ten moves per second and still won, then you could admit that it was much more intelligent than if it examined ten billion.
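
(For scale -- taking the commonly quoted rough figure of about 35 legal moves per chess position, exhaustive look-ahead explodes like this, which is why the machine's feat is one of throughput rather than selectivity:)

    branching_factor = 35          # rough average legal moves per position
    for plies in (2, 6, 12):
        print(plies, "plies:", format(branching_factor ** plies, ".1e"), "positions")
    # 2 plies: 1.2e+03, 6 plies: 1.8e+09, 12 plies: 3.4e+18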

Chess is a really bad example because it's a situation where 1) brute force invariably trumps intelligence given a limited time frame and 2) it is extremely easy to test solutions for validity before submitting them. Solving mathematical problems is much harder because of the explosion in possibilities; and a machine can't know whether it has correctly identified the sole female in a photograph of a group of people without consulting someone administering the test.

2010-06-17 09:52:12 by db:

The most intelligent intelligence possible has already been created (by humans, in 2007):

http://www.hutter1.net/ai/aixigentle.htm

It is not practical because it needs a lot of computing resources, but that's all it needs. Throw enough computing power at it and it can solve any solvable problem.

2010-06-17 10:57:16 by Supergrunch:

What about something like a connectionist network, where inputs and outputs are defined by the tester, but the weights of connections vary based on a pre-defined algorithm? It seems that in principle it could be possible to apply such a system to new human-unsolvable problems after a period of learning, although of course we'd have no guarantee of validity, making everything a bit futile.
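
(A minimal concrete version of what Supergrunch describes -- a single-layer perceptron whose connection weights are adjusted by a fixed, pre-defined rule. Everything here is a toy; it's just the shape of the idea:)

    import random

    def train_perceptron(examples, epochs=50, learning_rate=0.1):
        """examples: list of (inputs, target) pairs, inputs a list of
        numbers and target 0 or 1. Weights are updated by the classic
        perceptron rule -- a fixed, pre-defined learning algorithm."""
        n_inputs = len(examples[0][0])
        weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
        bias = 0.0
        for _ in range(epochs):
            for inputs, target in examples:
                activation = sum(w * x for w, x in zip(weights, inputs)) + bias
                output = 1 if activation > 0 else 0
                error = target - output
                weights = [w + learning_rate * error * x
                           for w, x in zip(weights, inputs)]
                bias += learning_rate * error
        return weights, bias

    # Learn the AND function from its four input/output pairs.
    weights, bias = train_perceptron([([0, 0], 0), ([0, 1], 0),
                                      ([1, 0], 0), ([1, 1], 1)])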

2010-06-17 11:35:44 by Morgan:

"It is impossible for a human to create a human-equivalent AI for the same reason that it is impossible for an ant to create an ant-equivalent AI. This is because it is impossible for any being to fully comprehend and understand itself! If the opposite were true, then logical contradictions arise."

If that's the point of the essay then I haven't missed it, I just think it's completely incorrect. You're conflating an individual having a full mental model of exactly what all parts of its brain are doing as they're doing it with a class of individuals (humans) understanding how their brains in general work well enough to make an artificial one that does the same job, despite not being perfectly identical to any one of them.

That, and the majority of your essay wasn't about your point.

2010-06-17 12:40:15 by Jim:

I cannot concede that it is impossible for someone of some maximum intelligence potential to create a device or being with a higher intelligence potential.

I define intelligence as the ability to apply appropriate problem solving to some task with some goal in mind.

By potential intelligence, I mean that it has the ability to reach some certain maximum. I believe that it _is_ possible for humans to one day create a computer which can learn more effectively than we can (even if only through the amount of data that it can effectively analyse). Such a learning computer, given much much longer to apply its experience and acquired knowledge than any human, will (one day) be able to create a computer which can learn in a more effective manner than itself. Recurse.

As to your question about the Great American Novel, it would depend on your goal. Profit? Write something populist. It won't be literature (per se) but will generate cash. Acclaim? Academics may laud you, but this may be a less profit-maximal approach (unless you're Harper Lee). With appropriate market research and effectively applied data, it should be possible to move towards both maxima (or some local maximum that is preferable). I don't think that it is out of the question for computers to achieve this at some point.

2010-06-17 15:40:53 by Thrack:

In response to the programmer of an AI or algorithm or whatever being as smart as that AI because they can accomplish everything the AI can themselves by hand with pencil and paper. What if I were given a map of a 160 IQ brain and the rules all the neurons follow and then used it to solve problems by tediously following the rules and discovering where it leads (let's suppose I do this by showing its "eyes" a picture of the problem before working it out). Would I then have an IQ of 160? I most certainly have very little understanding of how the problem was solved despite doing it all myself. Of course then there is the problem of extracting the answer but I'm sure this would be possible because, after all, modern machines can detect basic thoughts or emotions based on brain scans so it should be possible to extract more detailed information when every neuron is visible.
Hmn... actually I think this is basically Searle's Chinese Room but worded differently. And instead of being refuted by saying the programmer could have done it (of course she/he *could*, but my point is that doesn't mean the programmer is as smart) it is the fact that there must be a human *somewhere* who is already that smart.
But that doesn't really matter because that's not my point. My point is that just because a machine was created by humans doesn't necessarily mean those humans are as smart as the machine. If you say a human is at least as smart as anything it creates then it is obviously impossible to create something smarter than a human. Using that logic, if you create an AI that scores an IQ of 300 it would mean the human also has that IQ regardless of what she/he scores on that same IQ test.

2010-06-17 19:10:16 by Daniel:

I know I disagree with this article, but I'm yet to figure out why. What does that say about my intelligence?

2010-06-17 20:07:26 by Jacob:

"It is impossible for a human to create a human-equivalent AI for the same reason that it is impossible for an ant to create an ant-equivalent AI. This is because it is impossible for any being to fully comprehend and understand itself! If the opposite were true, then logical contradictions arise."

This struck me as odd. Why do I need to "fully comprehend and understand" myself before I'm able to create an equivalent of myself? You say logical contradictions arise. Maybe I'm ignorant on the topic, or haven't thought it through enough, but what are these contradictions?

Plus, you seem to be ignoring the point that's been raised a few times in the comments: Just because I, personally, cannot fully comprehend and understand myself does not mean that a group of people cannot fully comprehend and understand me. As someone else said earlier, why can't humanity *as a whole* create a single human-equivalent AI? If you think about it, humanity overall is definitely more complicated/intelligent than a single human-level AI, so we don't have the problem of "creating something greater than yourself".

2010-06-17 20:44:58 by Kazanir:

"It is impossible for a human to create a human-equivalent AI for the same reason that it is impossible for an ant to create an ant-equivalent AI. This is because it is impossible for any being to fully comprehend and understand itself! If the opposite were true, then logical contradictions arise."

I think we have fairly clear counterexamples of this just from the existing IQ examples. We didn't have to fully comprehend all the elements that go into human intelligence to find some limited ways to boost it (like the nutrition and early education examples that I cited.) It's possible in the future that we could discover other ways to make humans VASTLY smarter through, say, genetic engineering or psychoactive drugs of some kind. Thus we will have "created" something that is far smarter than ourselves, by exploiting certain levers of biology even though we don't have a programmatic comprehension of the entire system that makes the human mind operate.

In the same vein, it should be possible to program a computer with the ability to analyze data, analyze results, compare code snippets, and make changes to that code based on an algorithm that decides what code modules are "smarter", i.e. better able to solve larger varieties of problems correctly. From there it's a short step to having a computer that can modify its own code, and eventually design its own algorithms and test them for superiority, etc. At that point you have a set of operations that, while originally based on human understanding, is adapting new code based on its own decisions and on incoming data in ways that the human programmers might not have originally foreseen. At that point you're well into "intelligence" territory by your own definition.

2010-06-17 21:45:56 by qntm:

"Exploiting levers of biology" falls under the "dumb luck" category, because that is why those levers are there. To both of the previous two comments, my response is simply that if you can't understand yourself, then you have no way of testing whether you've created a human-equivalent AI or not.

A "human brain" fulfills certain criteria; namely, it can solve certain classes of problems. It's not possible to simply enumerate all of those problems and their solutions and put them into a bucket; we must find the core algorithms which result in those solutions. To copy the workings of the brain directly out of a real mind also falls under "dumb luck"; you may have used advanced technology to create that copy, but the original and the copy were both a product of evolution.

But how can you know that a problem has been solved correctly unless you know the solution? And how can you know the solution unless you have solved it yourself? So, how can you know that a problem has been solved correctly unless you have solved it yourself, or at least read *and understood* a solution which was created by someone else who did solve it?

How can you even recognise a superhuman AI unless it can solve superhuman problems? How can you recognise that it has solved a superhuman problem without yourself being superhuman? It is easily possible to conceive of entities who see through problems which humans find completely intractable, and whose solutions, even, are incomprehensible.

Likewise, I submit that "build a human-equivalent AI" entails the task "test that a given AI is human-equivalent", which entails the task "devise the criteria which make an AI human-equivalent", which entails "fully know the human brain", which is impossible for a human.

The other thing that came up is that humans working in concert know more than any individual human knows alone. Well, this is legitimate. But I believe that there is a spectrum of human brains. Some are exceptional, some are very simple. I believe that to emulate an intelligent brain is impossible because all intelligent human brains would share too many features to understand one another fully; and that a brain simple enough to be fully understood and emulated by humans would surely be considered subhuman.

2010-06-18 01:26:40 by Rovolo:

First off, two arguments:
**********
--------
"How can you recognise that it has solved a superhuman problem without yourself being superhuman?"

Not a high-level mathematician, but how about something like an NP problem which is hard to solve but easy to check. However, I guess that you are going to argue that we don't know whether a human cannot solve the problem given infinite time because we don't have an infinite amount of time to check it.
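
(A concrete instance of "hard to solve but easy to check" is subset sum: finding a subset of numbers adding up to a target takes exponential brute force in the worst case, while checking a claimed answer is immediate.)

    from itertools import combinations

    def solve_subset_sum(numbers, target):
        """Brute force: try every subset -- 2**len(numbers) candidates."""
        for size in range(len(numbers) + 1):
            for subset in combinations(numbers, size):
                if sum(subset) == target:
                    return subset
        return None

    def check_subset_sum(numbers, target, claimed):
        """Verifying a claimed solution takes no search at all."""
        return sum(claimed) == target and all(x in numbers for x in claimed)

    answer = solve_subset_sum([3, 34, 4, 12, 5, 2], 9)                # exponential search
    print(answer, check_subset_sum([3, 34, 4, 12, 5, 2], 9, answer))  # cheap check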
--------
"Searle's Chinese Room … The critical component here, cunningly omitted from the problem, is the original author of the computer program. That guy can definitely speak Chinese."

Theoretically, the programmer has a child, who then learns Chinese and writes the program. The programmer 'wrote' the child, which in turn wrote the program, but the 'original author' definitely doesn't understand Chinese.
-------
*************
Thinking through these two arguments I think has helped illuminate the fundamental problem with your argument.

The infinite time clause.

By invoking 'infinite time' you screw the entire discussion up. For example, you state the following:
  "An ant cannot prove Pythagoras' Theorem, even an ant uploaded into a computer and run at a trillion times its normal thought rate."
but you don't know that. Given infinite time, that ant can come up with a solution (assuming that the ant can interact in a virtual environment) by way of evolution. Even though it is pretty obvious that we're smarter than the ant, we can't say for certain under your rules. To use a more interesting example, it's not obvious that any problem we solve cannot be solved by a chimp given infinite time.

Therefore, even if we were to come up with an AI which was by all accounts smarter than any human, and it solved every problem we threw at it that a human could solve, it would still be uncertain whether it had human intelligence. I think that at some point, you have to say 'screw it' and call it good. Even better, what if the AI solved a problem which no human could in a reasonable amount of time? I know that you've said that this scenario doesn't mean that a human couldn't solve it, but saying 'we cannot know' is just kind of a cop out because we can't know anything for certain. At some point, the probability of something being true is just too great to ignore.

******
My justification that we can create a 'super-human' AI:

1: I think that it should be obvious that we can solve more problems than a spider monkey.
2: The reason for the difference is not so much the structure of the brains, but the size.
3: It's entirely within reason that we can create an AI as smart as a human by running a simulation of the human brain.
Ergo,
By cranking that fucker up to eleven and adding more neurons we can create a 'super-human' intelligence.

However, you said:
"Likewise, I submit that "build a human-equivalent AI" entails the task "test that a given AI is human-equivalent", which entails the task "devise the criteria which make an AI human-equivalent", which entails "fully know the human brain", which is impossible for a human."

which implies that you wouldn't agree with fact 3. I think that you are wrong on that point though. Since the brain is physical, we should be able to completely simulate it.

*********************
-------POSTSCRIPT
*********************

"Why there will never be human-equivalent AI"
"Lesser intelligences cannot create greater intelligences…"

Your argument states that a human-equivalent AI cannot be 'Intelligently designed'. However, as you say:

"…dumb luck notwithstanding."

An AI under your rules could be created by an evolutionary algorithm using 'dumb luck' and evolve to human intelligence as has happened to us. Therefore, your original claim is invalid.
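
(For concreteness, the "dumb luck" route is just this loop -- random variation plus selection against a fitness test, in the spirit of Dawkins' "weasel" demonstration. The fitness function here is trivially easy to score, which is exactly the part that is not trivial for intelligence:)

    import random

    ALPHABET = "abcdefghijklmnopqrstuvwxyz "
    TARGET = "methinks it is like a weasel"   # toy fitness: closeness to a fixed string

    def fitness(candidate):
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate, rate=0.02):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in candidate)

    # Start from random noise; keep the child whenever it scores at least as well.
    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    for generation in range(100000):
        child = mutate(best)
        if fitness(child) >= fitness(best):
            best = child
        if best == TARGET:
            break
    print(generation, repr(best))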

2010-06-18 02:28:02 by Thrack:

Was one of your arguments against human level AI that we wouldn't be able to recognize it if we built it? Such recognition doesn't seem necessary, just the skills and knowledge to build it in the first place. Besides, verifying a solution to, say, the P=NP problem or innovative energy sources or more comprehensive physical theories seems trivial even if the answers themselves take much longer to understand (if ever). How are you expecting a superhuman AI to act? Or are you assuming it's unknowable and probably incomprehensible?
Here's an example of a robot performing experiments, gathering data, forming hypotheses, and testing them. I came across it some months ago. I also heard of a program that figured out several mathematical concepts on its own but I don't think I've ever seen an article on it.
http://en.wikipedia.org/wiki/Adam_%28robot%29
http://www.scientificamerican.com/article.cfm?id=robots-adam-and-eve-ai

Of course it only solves a specific set of problems.

2010-06-18 07:24:12 by AverageJoe:

Without going into a lot of detail about AI or the brain and such, since my knowledge of either subjects is quite poor, I think the biggest problem here is your restriction that time frame is thrown out the window and "no matter how long I lived, or how fast my brain was simulated, there are problems that I would never be able to solve because I am not smart enough." You say that but how do you know you cannot solve such problems? Perhaps given an infinite amount of time you eventually will be able to solve these problems. So when you placed those restrictions on AI, then yes you are right, AI cannot be smarter than a human (maybe equivalent).

My point is I think you are looking at the definition of human equivalent AI in the "wrong way" or at least too narrowly. Anyways I hope I got my point across, I had trouble getting my ideas down while at work.

2010-06-18 09:40:20 by Morgan:

You're still conflating an individual being aware of his own mental processes in real time with humans understanding how the human brain works. I can't fully model my own thoughts while I'm thinking them, because that would be an infinite regress. But nothing prevents me in principle from modeling (very, very slowly) another person's mental processes. You're taking the statement that *a* human cannot understand *his own* brain in full and specific detail and via semantic confusion concluding that humans *as a group* cannot understand the human brain *as a general class of objects*.

There is no logical reason why we shouldn't eventually be able to simulate the formation and development of a human brain in software simply by using virtual neurons; it wouldn't necessarily teach us much (though I would certainly expect it to), but there's no reason why it should be impossible in principle. The resulting brain is not mine or yours or anyone's who made it, so there's no quining involved. It's structurally a human brain made out of virtual components instead of meaty ones. What argument can you make against considering it intelligent, on the same level as any organic human brain? You may not be able to *prove* its intelligence, but nor can you *prove* the intelligence of any normal human by the standard you're setting up.

(This is the point of the Turing test, by the way, which I think you've misunderstood. It's not that "if a computer can fool someone into thinking it's a human, then it is one". It's that "if a computer can behave enough like a human that someone actively trying to do so can't distinguish between it and a real human, we have no reasonable grounds for saying it doesn't have humanlike intelligence any more than we do for saying that any real human we meet isn't actually conscious and aware but just faking it". Build a brain that scores within human ranges on all the tests we use to assess human intelligence, and you have a humanlike AI.)

2010-06-18 10:57:17 by qntm:

Okay, given any specific being, let's say that there is a minimum superior being which can fully understand and model the first being's thoughts. So, for any given human there is a hypothetical thingie which could successfully create an AI at the same level as that human. Likewise, given any being or group of beings, there is a maximum intelligence of AI which they would be capable of creating from scratch.

I think there is a huge quantum leap between these realms of intelligence. It's self-evident that there must be *some* gap, as stated above. But I think that the gap is actually very large indeed. I think that if you take every human in the world and have them all work together, and you look at the threshold of what they could collectively create, then there *may* be humans in the world who fit under that threshold, but not smart humans. Very unintelligent humans.

2010-06-18 11:14:37 by qntm:

And I fully understand the Turing Test. The problem I have with it is: how do you select a tester? How do you prove that that individual is human, and capable of administering the test without being fooled? If you can't distinguish between a human and the machine, that proves one of two things: 1) the machine is effectively identical to the human or 2) *you have made a mistake*, perhaps having been fooled or tricked or just not being very good at divining human thought. Which is more likely?

2010-06-18 11:42:03 by Morgan:

> I think there is a huge quantum leap between these realms of intelligence. It's self-evident that there must be some gap, as stated above.

It's not at all self-evident.

So: why should it be impossible for humans to build a simulation of a human brain in software?

2010-06-18 11:59:59 by DRMacIver:

Sam, your conclusion doesn't seem to be a useful one.

I don't agree with all your arguments, but I'm not going to worry about that for now. Instead I would like to point out that even if your conclusion were valid, it wouldn't actually obstruct anything interesting.

The problem is that your conclusion demands a much higher standard of proof than "this thing satisfies the needs we have for human-level AI". Consider any of the following scenarios:

- through quantum magic I have scanned a human brain and uploaded it into a virtual environment. I don't understand why it works; I just understand enough of the low-level details to write a simulator for it given the program.
- I have trained an expert system as a telephone operator to the point where it passes the Turing test (as well as any telephone operator does, anyway). It has no "intelligence", but through a massive amount of user feedback on interactions with it, it has "learned" what answers users want from its database. Within its limited domain it is equivalent to talking to a helpful expert in the subject.
- I have a massive, cleverly indexed database with a good expert system for predicting my needs and finding associations, and a good UI onto it. This functionally increases my intelligence enormously without requiring any understanding of the nature of that intelligence. You could argue that this doesn't increase the amount I could do with an arbitrary amount of time, but I disagree: human thought processes don't work that way. Having knowledge instantly available affects a thought process very differently from merely knowing that you could get that information if you wanted it.

None of these are ruled out by your requirement that one must understand the human brain in order to better it, because producing them doesn't require understanding the human brain, and their acceptance criteria don't require that understanding either. They may not be human-equivalent AI or intelligence boosting by your acceptance standards, and we may have arrived at them through a process of "dumb luck", but they'd be more than good enough for me or, I think, for most people who want human-level AI.

So, in short, maybe you're right. Maybe you're not. Hard to say for sure. But the point you're arguing seems sufficiently disconnected from the point you're arguing against that it doesn't seem terribly relevant whether or not you are.

2010-06-18 12:28:38 by qntm:

It's self-evident that there must be some gap because a thing cannot understand itself. Didn't I just explain this?

2010-06-18 12:39:37 by Morgan:

No, you *asserted* it, and are still confusing a single entity understanding its own mind with any given member of a class understanding the functioning of the class in general.

It's a pretty slippery use of "understanding", too. It's not like you need to be able to keep all the details of a machine and all its components and interactions in your head at once in order to build it (or invent it). Problems decompose.

You don't seem to have an actual argument, and I doubt I can convince you that's so if you're not already conscious of it, so that's that I guess.

2010-06-18 14:00:54 by qntm:

I've explained the two statements here pretty clearly at least three times now: 1: nothing can understand itself (self-evident), 2: all humans combined can't understand any individual human (assertion). You get this, right? There are two statements here, and I'm ascribing different levels of confidence to each of them?

Problems decompose, but that decomposition is itself a problem too, and problems can only be decomposed so far, and the problem "How does a human brain work?" is very difficult to deal with on both levels.

2010-06-18 15:02:14 by Kazanir:

So really, your argument is not that it would be impossible to exploit XYZ to create a superhuman intelligence, but that it would be impossible to prove you had done so, because we have no way of demonstrating, in a philosophically sound way, that any given intelligence is truly superhuman. Which is fine as far as that argument goes, but it is an argument about our ability to classify things and prove those classifications, rather than an argument about what is physically possible.

Eventually we will be able to create a machine intelligence and teach it how to learn. Once it can do that, it's only a matter of time before it exploits superior processing power to find its own levers and make itself intelligent enough to start solving problems (with demonstrable real-world results) that humanity has not solved before. Does that necessarily prove that human intelligence wouldn't have been up to the task, given enough time? No. But I suspect that if our AIs end up perfecting quantum teleportation and creating fusion reactors and colonizing the moons of Jupiter, the distinction won't be all that important to us, will it?

2010-06-18 15:17:02 by Thrack:

Ah, it had seemed like you had been insisting #1 is true even after a possible solution had been given (#2). However, #1 is up to some interpretation; there is still the difference between understanding oneself and modeling oneself in real time, isn't there? Are you saying it is impossible for a neurologist (or whoever) to know each component of the brain and understand how it works and for what purpose? I don't mean memorizing each neuron and being able to mentally simulate it. Obviously that's impossible for any human. Let's take our ability to interpret input from our eyes: a great deal of knowledge and understanding has been gathered on how people see motion, color, optical illusions, etc., without a need to model each neuron. A program could be written that performs the same functions as the visual cortex, with similar perceptions of motion and color. Now you just have to do that with the rest of the brain and you have a proof of concept.
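As a toy illustration of that functional (rather than neuron-by-neuron) approach, here is a sketch in Python of a motion detector that flags where two greyscale frames differ. The frames and threshold are made-up values; it is a stand-in for one crude thing visual processing does, not a model of any actual cortex:

    def detect_motion(prev_frame, curr_frame, threshold=30):
        # Mark every pixel whose brightness changed by more than the threshold.
        return [
            [abs(c - p) > threshold for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev_frame, curr_frame)
        ]

    # Two tiny 3x3 greyscale frames; only the bottom-right pixel "moves".
    frame_a = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
    frame_b = [[10, 10, 10], [10, 10, 10], [10, 10, 200]]

    print(detect_motion(frame_a, frame_b))
    # [[False, False, False], [False, False, False], [False, False, True]]

No neurons are simulated anywhere in it; only the observable function is reproduced.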

2010-06-18 15:22:02 by Morgan:

> I've explained the two statements here pretty clearly at least three times now: 1: nothing can understand itself (self-evident), 2: all humans combined can't understand any individual human (assertion). You get this, right? There are two statements here, and I'm ascribing different levels of confidence to each of them?

If that's so then it wasn't at all clear, because then the article would have more sensibly been titled "Why I don't think we'll ever have human-level AI" and the body would have been simply "Because it's *really hard*, guys".

There is no argument here worth engaging with, just a lot of pointless window-dressing on an argument from personal incredulity.

2010-06-18 15:46:49 by qntm:

Well, I suppose you're right.

EDIT: Ah, dang it. You ARE right. Hurm.

This discussion is closed.