Nicole calls Albert into the monitor room. She has watched the same video reel so many times that she feels she has lost her objectivity, and needs a second opinion. She cannot decide whether the robot has assessed the data and chosen to represent it in a way the boy will understand, or whether it is genuinely expressing concern about its own wellbeing.
Albert enters the room and sits facing the screen. The twins have an understanding that when an objectivity check is required, it's generally best simply to present the information, without preamble or explanation, which might introduce bias and render the second opinion moot. He nods to his sister, and in response she hits the play button.
The clip is from an observation camera, and the subjects being observed are Oscar Coxcomb, the boy chosen to be the first companion, and Zero, the first robot given the Temporal Impedance Subroutine in its code. Although the project is entirely Nicole's work, she sees it as a collaboration with her brother, since it was the assessment of his misadventures in solitary confinement which led her to the notion of killing time. The robot and the boy sit facing each other, neither saying a word. Each appears to be assessing the other. It is Zero who breaks the silence.
“Why am I?”
“Why are you what?”
“Why am I anything?”
“You have to be something, otherwise you just wouldn’t exist”
“But who’d know if I didn’t exist? I wouldn’t know. So I don’t exist for my benefit”
“I’d miss you”
“So I am because you are”
“Maybe”
Oscar looks the robot up and down. He doesn't yet have the vocabulary, nor the life experience, to explain existentialism to his new companion. No one was expecting this kind of interaction, but he is nonetheless clearly intrigued by it. It demonstrates a curiosity which is almost child-like. It's embryonic: it seems alive, but not yet complete and ready for the world.
“When will I die?”
“You’re a robot, robots don’t die”
“So I’ll live for ever?”
“I don’t know. I don’t think so”
“When will you die?”
“I don’t know, no one knows when they’ll die”
“Does that frighten you?”
“Sometimes”
“I think it frightens me too”
Nicole pauses the playback and looks across at Albert, but says nothing. He sits a while, simply taking in the image of the two companions facing each other. He immediately sees what it was that Nicole wanted him to see, and tries to decipher from the robot's face what is going on in its mind. A futile endeavour, since the face remains permanently blank, but Albert cannot help projecting onto the stoic visage nonetheless.
“She doesn’t look frightened”
“Even if she did, we’d have the same problem”
“True enough. Did the code give any clue?”
“I’m still sorting through it. She’s made a lot of changes, and some of it is in a new character set that I didn’t program, so I have to figure out the purpose of that first”
“So at this point there’s no way to know for certain? How long was the pause?”
“Just over two seconds”
“That’s a lot of thinking”
“The code she processed during it would definitely back that up”
“So it wasn’t just a simulated conversational pause?”
“No, there was definitely more to it”
“So what have you been able to discern from the new code?”
“I can’t really find a context for it. My best guess is that it refers to a range of frequencies, but it doesn’t use any of the generally accepted parameters for measuring them”
“That’s something at least. So I guess what you really brought me in for was to ask whether I think she’s frightened?”
“And do you?”
“Honestly I don’t know. We’ve seen them fake responses before, contextualise things for humans. Could just be that, but then why the new code?”
“Exactly. I’ve sent it to Henry, see what he can make of it”
“Let me know what he says”
“He’s calling me in five minutes - want to stick around?”