“I have come across her name in a few old books. She was one of the early pioneers in robotics.”
“Is that all you know of her?”
Baley made a gesture of dismissal. “I suppose I could find out more if I searched the records, but I have had no occasion to do so.”
“How strange,” said Fastolfe. “She’s a demigod to all Spacers, so much so that I imagine that few Spacers who are not actually roboticists think of her as an Earthwoman. It would seem a profanation. They would refuse to believe it if they were told that she died after having lived scarcely more than a hundred metric years. And yet you know her only as an early pioneer.”
“Has she got something to do with all this, Dr. Fastolfe?”
“Not directly, but in a way. You must understand that numerous legends cluster about her name. Most of them are undoubtedly untrue, but they cling to her, nonetheless. One of the most famous legends—and one of the least likely to be true—concerns a robot manufactured in those primitive days that, through some accident on the production lines, turned out to have telepathic abilities—”
“What!”
“A legend! I told you it was a legend—and undoubtedly untrue! Mind you, there is some theoretical reason for supposing this might be possible, though no one has ever presented a plausible design that could even begin to incorporate such an ability. That it could have appeared in positronic brains as crude and simple as those in the prehyperspatial era is totally unthinkable. That is why we are quite certain that this particular tale is an invention. But let me go on anyway, for it points out a moral.”
“By all means, go on.”
“The robot, according to the tale, could read minds. And when asked questions, he read the questioner’s mind and told the questioner what he wanted to hear. Now the First Law of Robotics states quite clearly that a robot may not injure a human being or, through inaction, allow a human being to come to harm, but to robots generally that means physical harm. A robot who can read minds, however, would surely decide that disappointment or anger or any violent emotion would make the human being feeling those emotions unhappy, and the robot would interpret the inspiring of such emotions under the heading of ‘harm.’ If, then, a telepathic robot knew that the truth might disappoint or enrage a questioner or cause that person to feel envy or unhappiness, he would tell a pleasing lie instead. Do you see that?”
“Yes, of course.”
“So the robot lied even to Susan Calvin herself. The lies could not long continue, for different people were told different things that were not only inconsistent among themselves but unsupported by the gathering evidence of reality, you see. Susan Calvin discovered she had been lied to and realized that those lies had led her into a position of considerable embarrassment. What would have disappointed her somewhat to begin with had now, thanks to false hopes, disappointed her unbearably.—You never heard the story?”
“I give you my word.”
“Astonishing! Yet it certainly wasn’t invented on Aurora, for it is equally current on all the worlds.—In any case, Calvin took her revenge. She pointed out to the robot that, whether he told the truth or told a lie, he would equally harm the person with whom he dealt. He could not obey the First Law, whatever action he took. The robot, understanding this, was forced to take refuge in total inaction. If you want to put it colorfully, his positronic pathways burned out. His brain was irrecoverably destroyed. The legend goes on to say that Calvin’s last word to the destroyed robot was ‘Liar!’”
Baley said, “And something like this, I take it, was what happened to Jander Panell. He was faced with a contradiction in terms and his brain burned out?”
“It’s what appears to have happened, though that is not as easy to bring about as it would have been in Susan Calvin’s day. Possibly because of the legend, roboticists have always been careful to make it as difficult as possible for contradictions to arise. As the theory of positronic brains has grown more subtle and as the practice of positronic brain design has grown more intricate, increasingly successful systems have been devised to have all situations that might arise resolve into nonequality, so that some action can always be taken that will be interpreted as obeying the First Law.”
“Well, then, you can’t burn out a robot’s brain. Is that what you’re saying? Because if you are, what happened to Jander?”
“It’s not what I’m saying. The increasingly successful systems I speak of are never completely successful. They cannot be. No matter how subtle and intricate a brain might be, there is always some way of setting up a contradiction. That is a fundamental truth of mathematics. It will remain forever impossible to produce a brain so subtle and intricate as to reduce the chance of contradiction to zero. Never quite to zero. However, the systems have been made so close to zero that to bring about a mental freeze-out by setting up a suitable contradiction would require a deep understanding of the particular positronic brain being dealt with—and that would take a clever theoretician.”
“Such as yourself, Dr. Fastolfe?”
“Such as myself. In the case of humaniform robots, only myself.”
“Or no one at all,” said Baley, heavily ironic.
“Or no one at all. Precisely,” said Fastolfe, ignoring the irony. “The humaniform robots have brains—and, I might add, bodies—constructed in conscious imitation of the human being. The positronic brains are extraordinarily delicate and they take on some of the fragility of the human brain, naturally. Just as a human being may have a stroke, through some chance event within the brain and without the intervention of any external effect, so a humaniform brain might, through chance alone—the occasional aimless drifting of positrons—go into mental—”
“Can you prove that, Dr. Fastolfe?”
“I can demonstrate it mathematically, but of those who could follow the mathematics, not all would agree that the reasoning was valid. It involves certain suppositions of my own that do not fit into the accepted modes of thinking in robotics.”
“And how likely is spontaneous mental freeze-out?”
“Given a large number of humaniform robots, say a hundred thousand, there is an even chance that one of them might undergo spontaneous mental freeze-out in an average Auroran lifetime. And yet it could happen much sooner, as it did to Jander, although then the odds would be very greatly against it.”
“But look here, Dr. Fastolfe, even if you were to prove conclusively that a spontaneous mental freeze-out could take place in robots generally, that would not be the same as proving that such a thing happened to Jander in particular at this particular time.”
“No,” admitted Fastolfe, “you are quite right.”
“You, the greatest expert in robotics, cannot prove it in the specific case of Jander.”
“Again, you are quite right.”
“Then what do you expect me to be able to do, when I know nothing of robotics?”
“There is no need to prove anything. It would surely be sufficient to present an ingenious suggestion that would make spontaneous mental freeze-out plausible to the general public.”
“Such as—”
“I don’t know.”
Baley said harshly, “Are you sure you don’t know, Dr. Fastolfe?”
“What do you mean? I have just said I don’t know.”
“Let me point out something. I assume that Aurorans, generally, know that I have come to the planet for the purpose of tackling this problem. It would be difficult to manage to get me here secretly, considering that I am an Earthman and this is Aurora.”
“Yes, certainly, and I made no attempt to do that. I consulted the Chairman of the Legislature and persuaded him to grant me permission to bring you here. It is how I’ve managed to win a stay of judgment. You are to be given a chance to solve the mystery before I go on trial. I doubt that they’ll give me a very long stay.”