Ed Ferman of F & SF and Barry Malzberg, one of the brightest of the new generation of science fiction writers, had it in mind in early 1973 to prepare an anthology in which a number of different science fiction themes were carried to their ultimate conclusion. For each story they tapped some writer who was associated with a particular theme, and for a story on the subject of robotics, they wanted me, naturally.
I tried to beg off with my usual excuses concerning the state of my schedule, but they said if I didn't do it there would be no story on robotics at all, because they wouldn't ask anyone else. That shamed me into agreeing to do it.
I then had to think up a way of reaching an ultimate conclusion. There had always been one aspect of the robot theme I had never had the courage to write, although the late John Campbell and I had sometimes discussed it.
In the first two Laws of Robotics, you see, the expression "human being" is used, and the assumption is that a robot can recognize a human being when he sees one. But what is a human being? Or, as the Psalmist asks of God, "What is man that thou art mindful of him?"
Surely, if there's any doubt as to the definition of man, the Laws of Robotics don't necessarily hold. So I wrote THAT THOU ART MINDFUL OF HIM, and Ed and Barry were very happy with it, and so was I. It not only appeared in the anthology, which was entitled Final Stage, but was also published in the May 1974 issue of F & SF.
That Thou Art Mindful of Him
The Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
1.
Keith Harriman, who had for twelve years now been Director of Research at United States Robots and Mechanical Men Corporation, found that he was not at all certain whether he was doing right. The tip of his tongue passed over his plump but rather pale lips and it seemed to him that the holographic image of the great Susan Calvin, which stared unsmilingly down upon him, had never looked so grim before.
Usually he blanked out that image of the greatest roboticist in history because she unnerved him. (He tried thinking of the image as "it" but never quite succeeded.) This time he didn't quite dare to, and her long-dead gaze bored into the side of his face.
It was a dreadful and demeaning step he would have to take. Opposite him was George Ten, calm and unaffected either by Harriman's patent uneasiness or by the image of the patron saint of robotics glowing in its niche above.
Harriman said, "We haven't had a chance to talk this out, really, George. You haven't been with us that long and I haven't had a good chance to be alone with you. But now I would like to discuss the matter in some detail."
"I am perfectly willing to do that," said George. "In my stay at U. S. Robots, I have gathered the crisis has something to do with the Three Laws."
"Yes. You know the Three Laws, of course."
"I do."
"Yes, I'm sure you do. But let us dig even deeper and consider the truly basic problem. In two centuries of, if I may say so, considerable success, U. S. Robots has never managed to persuade human beings to accept robots. We have placed robots only where work is required that human beings cannot do, or in environments that human beings find unacceptably dangerous. Robots have worked mainly in space and that has limited what we have been able to do."
"Surely," said George Ten, "that represents a broad limit, and one within which U. S. Robots can prosper."
"No, for two reasons. In the first place, the boundaries set for us inevitably contract. As the Moon colony, for instance, grows more sophisticated, its demand for robots decreases and we expect that, within the next few years, robots will be banned on the Moon. This will be repeated on every world colonized by mankind. Secondly, true prosperity is impossible without robots on Earth. We at U. S. Robots firmly believe that human beings need robots and must learn to live with their mechanical analogues if progress is to be maintained."
"Do they not? Mr. Harriman, you have on your desk a computer input which, I understand, is connected with the organization's Multivac. A computer is a kind of sessile robot; a robot brain not attached to a body-"
"True, but that also is limited. The computers used by mankind have been steadily specialized in order to avoid too humanlike an intelligence. A century ago we were well on the way to artificial intelligence of the most unlimited type through the use of great computers we called Machines. Those Machines limited their action of their own accord. Once they had solved the ecological problems that had threatened human society, they phased themselves out. Their own continued existence would, they reasoned, have placed them in the role of a crutch to mankind and, since they felt this would harm human beings, they condemned themselves by the First Law."
"And were they not correct to do so?"
"In my opinion, no. By their action, they reinforced mankind's Frankenstein complex; its gut fears that any artificial man they created would turn upon its creator. Men fear that robots may replace human beings."
"Do you not fear that yourself?"
"I know better. As long as the Three Laws of Robotics exist, they cannot. They can serve as partners of mankind; they can share in the great struggle to understand and wisely direct the laws of nature so that together they can do more than mankind can possibly do alone; but always in such a way that robots serve human beings."
"But if the Three Laws have shown themselves, over the course of two centuries, to keep robots within bounds, what is the source of the distrust of human beings for robots?"
"Well"-and Harriman's graying hair tufted as he scratched his head vigorously-"mostly superstition, of course. Unfortunately, there are also some complexities involved that anti-robot agitators seize upon."
"Involving the Three Laws?"
"Yes. The Second Law in particular. There's no problem in the Third Law, you see. It is universal. Robots must always sacrifice themselves for human beings, any human beings."
"Of course," said George Ten.
"The First Law is perhaps less satisfactory, since it is always possible to imagine a condition in which a robot must perform either Action A or Action B, the two being mutually exclusive, and where either action results in harm to human beings. The robot must therefore quickly select which action results in the least harm. To work out the positronic paths of the robot brain in such a way as to make that selection possible is not easy. If Action A results in harm to a talented young artist and B results in equivalent harm to five elderly people of no particular worth, which action should be chosen?"
"Action A," said George Ten. "Harm to one is less than harm to five."
"Yes, so robots have always been designed to decide. To expect robots to make judgments of fine points such as talent, intelligence, the general usefulness to society, has always seemed impractical. That would delay decision to the point where the robot is effectively immobilized. So we go by numbers. Fortunately, we might expect crises in which robots must make such decisions to be few… But then that brings us to the Second Law."