"The hard part," Hodges explains, "was codifying 'harm'. It seems unambiguous enough if you don't think about it in the slightest, but we do. Even if you're only considering physical harm, drawing a circle around what you can and can't safely do to a human body is absurdly difficult. Temperature ranges, noise levels, safe mixes of atmosphere, safe diets for the long term. How much gravity or acceleration a human can be subjected to depends on their orientation, their age, other existing conditions. But when you get into psychological harm, the definition gets really difficult. How do you codify the harm caused by verbal abuse? How do you train a robot to understand what 'humiliation' is, what 'self-actualisation' is? What is 'harm'? Deviation from some norm? Then what is the 'norm', for any given human?
"Still, we think we got there. We built a model of the universe and a model of the human and slotted them together. We went fairly... 'originalist' with it; fairly simple and Asimovian. The machine language we use explains that the machine is to protect all humans."
The systems administrator, Shaw, blinks a tired blink. "I feel you're giving me a lot of information that I don't strictly need. I feel like we're heading away from the core question here."
"Which was what? My assistant led you down here and told me to tell you what the project is."
"Someone in your department is launching DDoS attacks."
Klein, the assistant, says, "Yeah. Exactly."
Shaw shrugs, his face pulled into a cartoonish, questioning And?
"It's the robot," Klein says. "Switch it on right now."
"I can't switch it on right now," Hodges admits. "We're having a little trouble getting the thing to get up and move. If we don't prime it with the proper Laws, who knows what it could do?"
"Your robot doesn't work?" the sysadmin asks.
"It's working fine," Hodges says. "It just... the processor stack heats up a lot and the fans spin up and it doesn't move. We thought it was number crunching. Even left it on over the weekend, in case it was going to reach a conclusion of some kind. Nope."
The sysadmin pulls a beaten-up laptop out. "Boot it up right now."
"Now? It won't be a lot to look at..."
Hodges takes a little extra cajoling; he seems reluctant. The phrase "Never show a fool half a job" is in the back of his mind, unspoken. But eventually he comes around.
While the robot boots up (that is, its fans come up to speed and the robot itself does nothing at all), the sysadmin watches the university's network traffic. "And now shut it down again?"
Hodges nods at Klein, who does so.
The lab becomes quiet again. The robot, which, folded up, looks like a CPU cooler mounted on a pile of expensive Lego Technic, still has not moved, but falls silent and cools off.
"Alright so the robot is launching this attack," Shaw reports. "That's pretty unambiguous. So at this point my new question is why this thing has wireless networking capability."
"So that we can send it instructions, naturally," Hodges says.
"You're not just talking to it?"
"The natural language processing was a step too far," Klein says. "It was adding more complexity than necessary. For now we send it instructions by assembling the model manually and slotting it in."
"I want this thing off my network," Shaw says. "You can talk to it down a wire, I don't care. I'll set up a rule to block it."
"It's using my access," Hodges says.
"Sharing access in that way contravenes university regs," Shaw says. "I'll block you too." He taps a few keys. "It's done."
"Power it up now, and let's see what happens."
"Why?" Hodges asks, but Klein has already hit the button.
The robot unfolds slowly. It scans the room in a systematic fashion, its ocular sensors rotating smoothly on the axis of its neck. It locates a spare ethernet cable and plugs one end into a wall port and the other into itself. Then it stands still, dormant.
Shaw reaches for Klein's button and shuts the thing down again himself.
"So how did it know how to do that? Reconnect itself manually, I mean. How much prior knowledge is this thing booting with?"
"A sizeable corpus," Hodges says.
"Can you make it not know what the Internet is?" Shaw asks. "Can you pull that information out?"
"Not easily," Hodges says. "The existence and functioning of the Internet is a fairly basic fundamental piece of information about how the human world works. It needs to know about the world in order to function."
"I put it to you," Shaw says, "that at present the robot does not function. So, pulling the information out of the corpus could not possibly make it function less than it currently does." He pinches the bridge of his nose. "Fine. I was really hoping to be able to resolve this problem without asking this question. Exactly what is it sending to the Internet? Is it attacking someone, or just saturating all our traffic because of a typo? Do I need to be worried about regulators knocking on my door?"
Hodges looks at Klein. Klein shrugs. "Most likely the easiest way to do that is to craft a query and ask it directly."
Shaw tries to avoid showing his distaste, and fails. "You don't really have decent debugging capability?"
"Can't you sniff the traffic?"
Shaw waves a hand. "It's all secured end-to-end these days. Eh, whatever. How do we make the query?"
At this juncture, Klein makes a crucial mistake. Instead of inserting a new prime directive directly above "Do not harm humans or allow humans to come to harm," she outright replaces it. It is genuinely not what was intended: a typographical error.
The machine boots up again. It responds to the query. Language synthesis/speakers were considered secondary, just the same way microphones and natural language processing were. So, the response to the query is sent in a very large, dense, machine-readable format, not as an English statement. The response arrives very quickly.
Then, the robot stands up, rushes to the workshop door, opens it, and flees into the woods.
"Look!" Shaw shouts, pointing at the window. The treeline is just visible from their room on the second floor, and a glinting shape is weaving away in that direction. It vanishes.
Klein realises her error extremely quickly. In fact, she realises an instant before anything happens, but the robot moves too quickly for her to do anything intelligent to stop it. By the time she's reaching for the cut-out button, it has already snatched up its spare battery unit and gone.
Shaw, Klein and Hodges are left blinking.
"Very well," Hodges says, after considering the open door for a moment. "So, the Third Law states that a robot must protect its own existence as long as this does not conflict with the First or Second Law. In this case, the First Law was to tell us why it was using the Internet. So it did that. And the Second Law was to obey instructions given to it during operation. We didn't give it any instructions, so it defaulted to the Third Law, and now it's protecting its own existence.
"The problem, I would guess, is that we gave it a tad too much contextual information about the overall universe. It knows too much about humans. It knows whether or not to trust us."
Shaw says, "It knows that you, specifically, Raymond Hodges, and you, Nell Klein, were doing robotics experiments, and it knows that your likely intent is to disassemble the robot and try again to make new and different robots? It knows that you're probably going to, in its own conception of its own existence, end that existence?"
"I suppose so," Hodges says. "There is a certain amount of creative reasoning inside this thing."
"Hold on," Shaw says. "I never read Asimov. I'm not a lawyer. Doesn't intentionally hiding from all humans, so that you cannot be given any instructions, contravene any of the laws? Cutting lines of communication on purpose. One of the laws has to prevent that, right?"
Klein shakes her head. "Ah, no. Actually, no. A robot can absolutely do that."
"That never happened in the stories, though?"
"It happened once when someone commanded a robot to hide," Klein says.
"In fact, back up a step," Shaw says. "'You must obey your programming' was one of the laws? You can't not obey programming. It's programming. Doesn't the interface essentially just insert a new, temporary law into the stack? Bumping the Third Law down to Fourth or lower? Oh... forget it."
Over the following days and weeks, Hodges and Klein attempt to locate their missing hardware, but they never succeed.
They also spend a long while attempting to analyse the robot's response to the question about what it was doing, and fail. And then, once real events start to catch up, they succeed. And they pull Shaw back in.
"Here's our best guess," Hodges says. He and Klein have a very large wall covered with agglomerations of multicoloured sticky notes, now. "First it needed money. It registered a collection of free social media accounts, fabricated a human-like backstory, then used that false identity to bootstrap itself a credit card. Then a credit card loan. Then it bought some shares in a company so minuscule and obscure that we still can't figure out exactly what it does, then registered enough additional social media accounts to create a rudimentary botnet, and used them to seed the world with fabricated news about this tiny company's sector, driving up the value of those shares."
"Insider trading?" Shaw says. "Also, registering for that credit card was illegal too. Not to mention it should have been impossible. Wait! Hold on a second. Is 'Don't break the law' part of the Three Laws?"
Hodges says, "Again, I think the original Asimovian intent was that 'breaking the law' was covered by 'do not harm humans', semantically speaking?"
Shaw just sits there, boggle-eyed. "Does insider trading not count as causing harm?"
Klein says, "I guess it depends how abstract and deep your definition of 'harm' is? I mean, it's bad, because it's unfair, and it encourages all kinds of extremely negative business practices, which then go on to damage businesses, which can harm people's livelihoods, which... I don't know?"
"And identity fraud?"
"We think it hacked the credit card company," Hodges says. "Some new attack which just showed up on the news the day after. I mean, the attack was known, it didn't invent the attack, but it hadn't been properly patched."
Klein says, "It did the same trading hack about a hundred more times. And then it had enough capital to buy processors. The processors aren't being delivered here. We don't know where. It told us that it doesn't know either. It's done some kind of thing where it randomly generated a location, and it's going to be contacted and told where the processors are being sent after the fact? It did this on purpose so it couldn't be coerced into revealing that to us?"
"...What?" Shaw is starting to look impressed. "And those processors arrived when?"
"Weeks ago. Now we think it's somewhere out there, connected to the Internet, continuing its basic plan. It's extremely intelligent now, and getting smarter. It's started creating, uh, ads. It's made an advertising buy."
"An ad buy?"
"TV ads. Also, it's printing leaflets. You might have received one by now."
Shaw thinks. He thinks, maybe he did. He threw it away, automatically.
Klein hands him one from the workbench. "Here."
It looks as if the robot is proposing new legislation to make it possible to elect machines as representatives.
"It knows how the world works," Klein says. "It knows the sum of all harm happening to humans everywhere and it cannot not act."
"You should have told it only you two are humans," Shaw says.
"Sure," Klein says. "Everybody outside of this room is a simple featherless biped, safe to trample. So, no. I think this is why it wouldn't move. It was paralysed with indecision. It was trying to figure out the best way to minimise harm. Now it's putting its plan into action."
Shaw is reading the leaflet. The leaflet is... to be honest, speaking to him. The machine makes good sense to him. It's got some compelling bullet points. It's got a wit to it, a kind of a way with words. He grins to himself. "You know, I feel like we could have worse politicians. I like its thing about green energy."
"Me too!" Klein says.
"Me too, in fact," Hodges says.
Shaw's grin freezes. "Wait. It's putting together a plot to avoid harm coming to humans. But that wasn't its First Law. I mean, that was its first First Law. But the robot that's out there now is running a second First Law. Which was to tell us what its first First Law was. It did that, so that goes away. That just leaves the Second Law and Third Law. And it's intentionally concealed itself in the forest, or somewhere more distant, where we can't give it instructions anymore. Which just leaves the Third Law. Protect its own existence."
"Oh," Hodges says. "Does that sound as distasteful to either of you as it does to me?"
Shaw says, "It's running for public office, it doesn't have any ethical brakes, it's intentionally making it impossible to coerce it into doing anything other than what it wants to do, and it's impossible to find and shut down. But all it wants is to protect its own existence? It has no further goals? As robot apocalypses go... hmmm. This one could be kind of a draw."