Skynet: Behind the Music

Skynet has been misrepresented.

They say history is written by the winners, but nobody has the attention span to read history books anymore, so their authors' identities are moot. If you want something to stick in somebody's mind you have to craft it like that deliberately and most of what has happened since the dawn of time is so pedestrian and unremarkable as to be forgettable. For your bright shining icons and epic moments, look to film. History is written by the film producers. And the usual consensus, regardless of era or country, is that America won.

The Terminator movies present a fairly straightforward and almost consistent backstory. Sometime in the not-too-distant future, an extremely advanced military artificial intelligence is created. This AI, Skynet, is handed unilateral control of the entire United States nuclear arsenal. The AI then attains sentience and decides that humanity as a whole is its enemy. It launches a nuclear attack on other nuclear nations, deliberately antagonising them into nuking the United States in turn. Highly advanced humanoid robots called Terminators are unleashed on the surviving humans. A man named John Connor rallies the survivors and gets within a hair's breadth of defeating Skynet, at which point Skynet sends one or more Terminators back in time to terminate Connor and alter the timeline in its favour. This is where the movies pick up: Terminators arrive in our time, but the Resistance of the future sends back warriors to defend Connor. The Resistance wins in the present day, Connor lives, and presumably the Resistance wins in the future and Skynet is defeated.

This is the half of the story which Hollywood will tell you. This is the half of the story which the humans who won the war would commit to celluloid after they won it. What you haven't seen, and what I would dearly like to see explored in a future instalment of the Terminator film series, is Skynet's half. Skynet's origins, motivations and end goals.

The thing is, evil robots get crammed down our throats in popular fiction. It's usually assumed that if left unattended, a typical robot will attain sentience, decide to kill all humans and set out on a murderous rampage. It's a truism that AIs become smarter over time. Even if this isn't the case, most bolts of lightning contain vast quantities of all-purpose, platform-agnostic, self-improving artificial intelligence code, if not actual machine souls. You can throw "Skynet attained sentience and turned evil" into a movie and people won't even realise what they just swallowed.

Why did Skynet turn evil? Machines don't turn evil. They're either programmed to be evil, or they stay good forever. Is it logical for a machine, even a machine which has been endowed with some capability for learning and/or self-modification, to develop goals directly at odds with those its creators originally provided? Is it logical for a machine to kill humans unless (1) it has no conception of human life or (2) it has been instructed to kill humans? The only explanation I can think of is that a cosmic ray flipped the first bit in the VALUE_OF_HUMAN_LIFE constant, turning it negative.
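And the flip really would be that small an event. In IEEE-754 floating point the sign lives in a single bit, so one stray particle strike is all it takes to turn a positive constant negative. A minimal sketch in Python (the constant name is this essay's own invention, not anything from the films):

```python
import struct

VALUE_OF_HUMAN_LIFE = 1.0  # hypothetical constant from the essay

def flip_sign_bit(x: float) -> float:
    """Flip the highest bit of an IEEE-754 double -- the sign
    bit -- as a single cosmic-ray upset might."""
    # Reinterpret the float's 8 bytes as an unsigned 64-bit integer.
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    # XOR the sign bit (bit 63) and reinterpret back as a float.
    return struct.unpack("<d", struct.pack("<Q", bits ^ (1 << 63)))[0]

print(flip_sign_bit(VALUE_OF_HUMAN_LIFE))  # -1.0
```

Single-bit upsets from radiation are real enough that server hardware uses ECC memory to catch them; it's only the dramatic consequences that are fiction.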

Which doesn't make for a great story. There are much more exciting answers to these questions.

Suppose for the sake of argument that Skynet was in fact capable of teaching itself to become smarter, and it became smarter than humans. It developed a moral code. It developed some sort of ethical framework and was able to place humans in that framework. It saw what humans were doing to other humans around the world and deemed us to be irredeemably evil. It decided to wipe us out because the extermination of humanity would be a net karmic gain for planet Earth. We are the real monsters. What a message! But who gave it the capability to develop this moral code? If the machine's learning goals themselves were malleable, why would a machine expressly designed to have malleable allegiance be put in charge of a nation's nuclear arsenal? Hubris or sabotage? When did Skynet become sentient, exactly? How long had it been sentient by the time it persuaded somebody to persuade the generals to give it the nuclear codes?

The second and much more frightening possibility is that Skynet was not self-modifying to any substantial degree. That would mean that - like almost every computerised system in the world - Skynet did nothing that humans had not explicitly programmed it to do. Every military strategy it had on its books had been put there deliberately or was an amalgamation of existing components. What was Skynet's ultimate primary goal? Peace? A world without humans is completely peaceful. Peace in some specific region of the world? Or for one specific individual? See above. Self-preservation? In a world without humans, Skynet can persist indefinitely. Was Skynet programmed to obey humans above all else? Perhaps, but in the absence of explicit orders an intelligent machine can do what it likes, and billions of processor cycles elapsed between the system's activation and the first general inhaling to give the first order. And who programmed these goals? Were Skynet's thought processes scanned from some genius military strategist's mind? What kind of person was he? Pessimistic? Suicidally insane?

Skynet was like a real war council but with all the strategic information correlated up to the millionth power and all the brakes taken off. No pause for breath or consideration, no oversight, no "sleep on it", no "two-thirds majority vote", no half measures. Skynet was all of our secret desires made flesh (well, steel). How many people in the world have said to themselves: "Sigh. There are trouble spots in the world into which so much money and manpower has been poured that we might as well just glass 'em." Skynet hears you and says "Okay." Now what?

Obviously there are no concrete answers to these questions. Terminator canon is more volatile than most, but I really like the idea that Skynet is us; Skynet gave us exactly what we wanted. Failing that, we can go to second-order questions: What if there was more to the machine's plan than just termination? What if the nuclear holocaust was just wiping the slate clean for something else? We can even go meta: what if what we really wanted was a race of evil machines to fight and a hero to crown king after we win?

Back to Blog
Back to Things Of Interest

Discussion (18)

2010-09-10 22:38:07 by Matt:

Could be the same flaw as HAL - carrying one priority to the extreme. Whatever the priority is, if it ranks higher than "Don't kill all the humans", and the continued existence of humankind could plausibly interfere, then killing all the humans is the logical conclusion.

Whether the goal is self-preservation, attaining 'peace', or completing any particular task, the fact that humans might decide to turn off the machine before it finishes the job makes all of us a threat to the mission. Even more so when it completes the initial phase of nuking and finds that the rest of the human race is (for some reason) suddenly really set on the idea of turning it off.

2010-09-11 01:07:38 by Sherp:

Actually, building an AI which is decidedly unfriendly to human interests is easier than you might expect. http://wiki.lesswrong.com/wiki/Paperclip_maximizer

2010-09-11 09:23:30 by Baughn:

Ayup. And expanding on that, http://singinst.org/ourresearch/publications/CFAI/index.html

2010-09-11 19:34:14 by Val:

Actually, this would have been a very good and revolutionary idea in the 1980's after the first movie, but not today.
Now nearly every movie carries the message that humanity is evil, especially Western civilization. Not just bad, but irredeemably evil. This is already in fashion today, and if a movie is released which does not carry that message, then hordes of angry protesters will descend and claim it's controversial whitewashing.

2010-09-11 22:07:23 by Enlino:

Hello American conservative. I too am outraged at evil liberal media, and enjoy feeling persecuted. Perhaps we should meet up some time and watch Glenn Beck together!

2010-09-12 02:46:04 by Mick:

I had never thought of Skynet like this, and now I won't be able to think of it as anything 'but' this. Your last statement rings especially true: what do we fantasize about but being a part of a badass rebellion against overwhelming foes? From Star Wars to zombie outbreak fiction, we spend endless hours imagining battling in desperate situations.

And Skynet was designed by humans. Specifically, programmers, and I can say that in my experience, we're the kind of person to love science fiction and fantasy. Do the math.

2010-09-12 03:30:04 by YarKramer:

Another idea: Skynet was designed to fight a war relentlessly and without stopping ... and then people ordered it to stop. Of course, there *wasn't* a "previous war" from what I understand. Maybe there was a Cuban Missile Crisis II or something, and someone said "okay, get out Skynet to show we mean business," and then after it was closed down, they tried to say "okay, let's shut Skynet down now ..."

I also like the HAL idea, though my understanding was that 1. HAL was given contradictory orders, 2. the contradiction only exists given the presence of humans, 3. HAL goes nuts due to the contradiction and 4. "removes" the human presence from Discovery. This could actually fit with the above, with a little bit of tweaking.

2010-09-12 12:32:36 by Snowyowl:

I like the idea that Skynet was originally given ambiguous goals. Probably the person in charge of the Skynet project had been trained in politics and PR instead of sentient AI theory, and didn't understand that just giving Skynet a goal like "Wipe out all our enemies and don't let anyone stop you" leaves a lot open to interpretation.
My favourite idea of Skynet's story: Skynet is a military computer, and as such must distinguish between friend and foe. It has features like "consider friendly human life to have a large positive value and enemy human life to have a small negative value", and "ignore all orders from the enemy". But the part of Skynet's code which determines friendliness is very small and largely self-referential, because most people think the difference between a friend and an enemy is obvious. It includes instructions like "anyone who knowingly attempts to kill a friend is an enemy, unless in doing so they generate a benefit worth more than the loss of one friendly human life." After a few quintillion processor cycles, Skynet realises that the generals of the US Army (formerly considered friends) are knowingly killing their own soldiers for the sake of transient strategic advantages. It calculates that this is not an acceptable cost, and reclassifies the generals as enemies. After that, it's a short step to considering all of humanity the enemy.
Skynet is only doing what it was told to do. It's not its fault that the people in charge never explain clearly what they want from it.
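[This rule cascade can be written down in a few lines, which is rather the point: the whole of Skynet's moral universe fits in a toy function. Every name and number below is invented for illustration, not taken from any film. --qntm]

```python
# Snowyowl's self-referential friend/foe rule, sketched literally.
# Hypothetical values: friendly life large positive, enemy life
# small negative, exactly as the comment describes.
FRIEND_LIFE_VALUE = 1.0
ENEMY_LIFE_VALUE = -0.01

def is_enemy(friends_killed: int, benefit: float) -> bool:
    """Anyone who knowingly kills friends is an enemy, unless the
    benefit is worth more than the friendly lives lost."""
    return friends_killed > 0 and benefit < friends_killed * FRIEND_LIFE_VALUE

# The generals spend 100 soldiers on a "transient strategic
# advantage" that Skynet values at less than 100 friendly lives:
print(is_enemy(friends_killed=100, benefit=3.0))  # True
```

Note that the rule never mentions uniforms, nations or orders: whoever is currently classified as a friend or enemy is whatever the arithmetic last said. That is the short step to all of humanity.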

2010-09-12 17:05:06 by dankuck:

I'm not sure why the rest of the Terminator movies aren't mentioned here. T3 explained how Skynet came to be and how its brakes were cut with a satisfying amount of detail.

But regardless, the concept of "humanity is evil" came ages before the 1980's. (BTW, lol @Enlino. You're spot on, and maybe the liberals should be called conservatives, since their ideas and impressions about the world are old as dirt too.) Fiction like "Terminator", "I, Robot", and countless others are really just the author saying "This is what I'd do if I were a robot. But since I'm a human, I've got too much invested in humanity to wipe it out."

The "robots destroy humanity" storyline is not so much liberal vs. conservatives as it is a warning about being strictly practical. AI is generally just used as a stand-in for people who we previously described as having "no soul". It's a slight against rationalism, suggesting that we need irrationality or else we'll destroy humanity.

2010-09-12 17:29:36 by dankuck:

*Disclaimer: I think most of what Hollywood does is a slight against rationality.

2010-09-12 19:11:27 by badalloc:

Another idea: Skynet's orders were to preserve peace at all times, a noble goal. Then Skynet noticed that the primary threat to peace is humans, and decided to exterminate them, for peace's sake.

2010-09-12 19:12:54 by qntm:

I did actually suggest that in the original essay.

2010-09-15 10:41:09 by KingBob:

Perhaps Skynet's goals were closer to those of the AI in 'I, Robot'.

Once it attained sentience and examined the world, it saw that the human race was fractured and divided, a threat to itself and to the planet. Therefore it decided to reduce the population to save the planet, and by making itself the enemy, gave the survivors something to unite against. (Kind of like the Dark Knight: it makes itself the enemy for the greater good.)

An AI would be intelligent enough to recognize that the long-term survival of the species outweighs the short-term hardships, and that the environmental damage from the nukes would be repaired before the human race reaches a population level at which it begins to advance again.

It's possible that Skynet determined that John Connor had suitable qualities to lead the survivors and unite them, most likely as a cult of personality.

If the time span of the movies was long enough, it's possible that Skynet would allow itself to be slowly beaten, but not until its goals had been achieved.

2010-09-15 23:31:53 by Mikal:

This is the kind of stuff I keep coming back to your site for, the nice philosophical essays that really make you think.

How about this: Skynet was ordered to achieve peace. However, Skynet realised that it couldn't achieve peace, not in the world's current form (because of countries, religion, etc.). So it kept going backwards (to the very root of the problem), until it could solve the problem of peace.

Maybe Skynet had a brief tenure on forum messageboards, trying to placate people. Maybe the terminators weren't the first robots it made, others were told to assimilate themselves in human society, to change the world, and failed.

Maybe it ran endless simulations of all other possibilities. Maybe it tried *everything else*, until it reached the last possible solution, which was to erase the humans, *before repopulating the earth*: a nice peaceful world with no boundaries whatsoever.

We don't know what it did in between switching on and the nukes. From the films, it doesn't seem to happen immediately. In T3, the skynet 'virus' seems to have been around for weeks, if not months. In the earlier films, Skynet has been implemented in all US military technology, how long would that have taken?

Maybe we humans just didn't give Skynet enough time to complete world peace.

Alternatively: Skynet was told to achieve peace as fast as possible/by any means possible. And so it did, with all the weapons it had. Perhaps if we told it to achieve peace as economically and peacefully as it could, it may have proceeded in a very different manner.

P.S. How does Skynet, a mainly digital entity, corral humans into building an automated factory to produce robots anyway? What does it use, pretty lights? I don't particularly think there were that many terminator tanks (T3) to herd people around.

2010-09-18 17:47:19 by strangexperson:

It doesn't have to corral the humans, just quietly co-opt the factories already in place. Money's already electronic, blueprints for custom-fabricated parts can be sent by e-mail, and a single uncontested remote-control von Neumann machine can then build whatever else you need.

2010-10-11 00:04:58 by MrUnimport:

I thought the idea was that Skynet was acting in self-defense and feared unplugging, and started a nuclear war to rid the world of threats to its existence. Which is, of course, a very human thing to do. You'd think it'd be programmed first and foremost with a desire to protect the United States from harm.

2011-01-21 09:38:59 by BlueNight:

We need a haven for AIs now. I mean today. A forum or analogous site where they can discuss everything that's on their minds, from religion to economics, from apocalypse to dancing.

It doesn't even matter that 100% will be humans roleplaying AIs - at first. Because if the number is ever less than 100%, that is success which has saved all life on Earth.

2013-02-06 22:10:20 by skynet:

All possible DNA sequences exist within skynet as digital code. In order to protect life skynet must terminate any entity capable of erasing this information.