* * * * *

"Any messages?" Creek asked his agent when he returned home.

"Three," the agent said, a disembodied voice because Creek had not put on his monitor glasses. "The first is from your mother, who is wondering whether you're planning to come visit her next month like you said you were going to. She's worried about your father's health and she also has a nice young lady she'd like you to meet, who is a doctor of some sort or another. Those are her words."

"My mother was aware that she was speaking to an agent and not me, right?" Creek asked.

"It's difficult to say," the agent said. "She didn't stop talking until she hung up. I was unable to tell her I wasn't you."

Creek grinned. That sounded like Mom. "Second message, please," he said.

"From Ben Javna. He is interested in the state of your investigation.''

"Send him a message that I have news for him and I will call him later this evening or tomorrow. Third message, please."

"You have a message from an IBM server at NOAA. Your software is unpacked, modeled, and integrated. It is awaiting further instructions."

Creek sat down at his keyboard and put on his monitor glasses; the form of his agent was now projected into the middle of his living room. "Give me a window at the IBM, please," he told his agent. The agent opened up the window, which consisted of a shell prompt. Creek typed "diagnostic" and waited while the software checked itself for errors.

"Intelligent agent" is a misnomer. The "intelligence" in question is predicated on the agent's ability to understand what its user wants from it, based on what and how that user speaks or types or gestures. It must be intelligent enough to parse out "urns" and "uhs" and the strange elliptic deviations and tangents that pepper everyday human communication—to understand that humans mangle subject-verb agreement, mispronounce the simplest words, and expect other people to have near-telepathic abilities to know what "You know, that guy that was in that movie where that thing happened and stuff" means.

To a great extent, the more intelligent an agent is, the less intelligent the user has to be to use it. Once an intelligent agent knows what it is you're looking for, retrieving it is not a difficult task—it's a matter of searching through the various public and private databases for which it has permissions. For all practical purposes the retrieval aspect of intelligent agency has remained largely unchanged since the first era of public electronic data retrieval in the late 20th century.

What intelligent agents don't do very well is actually think—the inductive and deductive leaps humans make on a regular basis. The reasons for this are both practical and technical. Practical, in that there is no great market for thinking intelligent agents. People don't want agents to do anything more than what they tell them to do, and see any attempt at programmed initiative as a bug rather than a feature. At the very most, people want their intelligent agents to suggest purchase ideas based on what they've purchased before, which is why nearly all "true intelligence" initiatives are funded by retail conglomerates.

Even then, retailers learned early that shoppers prefer their shopping suggestions not be too truthful. One of the great unwritten chapters of retail intelligence programming featured a "personal shopper" program that all-too-accurately modeled the shoppers' desires and outputted purchase ideas based on what shoppers really wanted as opposed to what they wanted known that they wanted. This resulted in one overcompensatingly masculine test user receiving suggestions for an anal plug and a tribute art book for classic homoerotic artist Tom of Finland, while a female test user in the throes of a nasty divorce received suggestions for a small handgun, a portable bandsaw, and several gallons of an industrial solvent used to reduce organic matter to an easily drainable slurry. After history's first recorded instance of a focus group riot, the personal shopper program was extensively rewritten.

The technical issue regarding true intelligence programming had to do with the largely unacknowledged but nevertheless unavoidable fact that human intelligence, and its self-referential twin, human consciousness, are artifacts of the engine in which they are created: the human brain itself, which remained, to the intense frustration of everyone involved, a maddeningly opaque processor of information. In terms of sheer processing power, the human brain had been outstripped by artificial processors for decades, and yet the human mind remained the gold standard for creativity, initiative, and the tangential inductive leaps that allowed it to slice through Gordian knots rather than attempt to painstakingly and impossibly unknot them.

(Note that this is almost offensively human-centric; other species have brains or brain analogues which allow for the same dizzying-yet-obscure intelligence processes. And indeed, all intelligent species have also run into the same problem as human programmers in modeling artificial intelligence; despite their best and most logical and/or creative efforts, they're all missing the kick inside. This has amused and relieved theologians of all species.)

In the end, however, it was not capability that limited the potential of artificial intelligence, it was hubris. Intelligence programmers almost by definition have a God complex, which means they don't like following anyone else's work, including that of nature. In conversation, intelligence programmers will speak warmly about the giants of the field that have come before them and express reverential awe regarding the evolutionary processes that time and again have spawned intelligence from non-sentience. In their heads, however, they regard the earlier programmers as hacks who went after low-hanging fruit and evolution as the long way of going around things.

They're more or less correct about the first of these, but way off on the second. Their belief about the latter of these, at least, is entirely understandable. An intelligence programmer doesn't have a billion years at his disposal to grow intelligence from the ground up. There was not a boss born yet who would tolerate such a long-term project involving corporate resources.

So intelligence programmers trust in their skills and their own paradigm-smashing sets of intuitive leaps—some of which are actually pretty good—and when no one is looking they steal from the programmers who came before them. And inevitably each is disappointed and frustrated, which is why so many intelligence programmers become embittered, get divorced, and start avoiding people in their later years. The fact of the matter is there is no easy way to true intelligence. It's a consonant variation of Gödel's Incompleteness Theorem: You can't model an intelligence from the inside.

Harris Creek had no less hubris than other programmers who worked in the field of intelligence, but he had had the advantage of peaking earlier than most—that Westinghouse science project of his—and thus learning humility at a relatively early age. He also had the advantage of having just enough social skills to have a friend who could point out the obvious-to-an-outside-observer flaw in Creek's attempt to program true intelligence, and to suggest an equally obvious if technically difficult solution. That friend was Brian Javna; the solution was inside the core data file the IBM machine at NOAA had spent a day unpacking and building a modeling environment for.

The solution was stupidly simple, which is why no one bothered with it. It was damn near impossible, using human intelligence, to make a complete model of human intelligence. But if you had enough processing power, memory, and a well-programmed modeling environment, you could model the entire human brain, and by extension, the intelligence created within it. The only real catch was that you had to model the brain down to a remarkable level of detail.
