Chapter 5: I've Got the Corporate DownTime Blues

Nutrunner: The Parody
Fiction by Pierre Savoie

"Gu-ten A-bend, Herr Dop-pel-kreutz. Ich bin der PAL Neun Tau-send--"
Gunther rubbed his tired, closed eyes and shook his head at the screen, then used his mouse to click on the language button, changing the primitive-sounding mechanical voice so Dr. Dan Druff could understand it.
"--Ex-per-i-men-tal A-I. I am here for my first les-son. I be-came o-per-a-tion-al at the PAL Kom-pu-ter-Wer-ke in Mu-nich, Ger-ma-ny on Sep-tem-ber 4th, 2017. My first in-struc-tor was Doc-tor von Schnook, the Swiss cy-ber-ne-tics ex-pert. He taught me to sing a song; would you like to hear it?"
Gunther was beyond disgust. "Yeah, sure, PAL, why don't you sing that song?"
And PAL began:

Every morning you greet me.
Small and White,
Clean and Bright,
You look happy to meet me.
Blossom of snow may you bloom and grow,
Bloom and grow forever,

Gunther at this point cut the main power switch, which on a hologrammatic distributed memory system with buffered power storage did not shut off the PAL 9000 all at once. The active reddish light in PAL's sole camera lens died off slowly, dimmed, and its voice slowed down as well.

"--Blesssss myyyyyy hoooooomelaaaaaaaaaaannnd fooooooooooorrrreeeeeeeeeevvvveeeeeeeeeeeeeeeeeerrrr..."
Dark, and silence.

"Frack," Gunther muttered softly, still with his hand over his eyes, which he then passed over his tousled hair.
Dr. Dan Druff, sitting opposite him, looked somewhat amused. Druff had been up as long as Doppelkreutz, but as a crack 'runner, used to all-nighters and not the corp nine-to-five routine, looked primmer and fresher. He took off his rectangular-rim granny-glasses and spoke, "And it ALWAYS reverts to that primitive state?"
"Always. It looks normal, but the moment we turn the program loose in our defensive grid it seizes up after a while. The longest it has stayed sane is 14 hours, then it's goo-goo, ga-ga time. The suits don't like that; they want to sue PAL-KW GmbH and get their money back."
Druff put his glasses back on and examined some printouts, and fiddled with the mouse of a computer to call up some menus and simulated data views: the computer-science equivalent of a computer-animated airliner crash. Only here a sophisticated Experimental AI program had crashed, intellectually at least, as if it had been freshly written with little self-modification and growth, utterly childlike and indecisive for all its code perfection.
"I never believed in AI's myself," said Druff. "It seemed too much like sending a weeflerunner out to do a net-vampire's job. Instead of trusting to programs you should hire someone (like ME! Reasonable rates...) to patrol your 'space in shifts. A REAL human brain behind the ICE, able to respond to 'runners on their own unpredictable terms."
Doppelkreutz remembered Hootie, but said nothing. Druff had no need to know. But Doppelkreutz was desperate, felt ready to confide. As a high-paid consultant and crack programmer, Druff seemed easy to talk to. "We think maybe a Trojan Rabbit, targeted specifically to this AI."
"But that's silly! PAL was written only three years ago in the strictest industrial confidentiality, and its final nature after its self-programming was totally impossible to predict. A Trojan depends on an utterly predictable program it can hide in and imitate. But an AI keeps track of its entire contents, and evaluates each part of itself for continued usefulness in the learning process. A virus can't survive; it's instantly detected as useless or inimical code."
Doppelkreutz shrugged. "Our best guess, that's all. What OTHER kind of programming imperfection causes reversion like that?"
"What, indeed?" Druff echoed absently, and clicked on the mouse for a half-minute like a demented Morse-code keyer. Window after window of stats and sim-views unfolded themselves in front of Druff's eyes. Then, he took off his glasses again, sucked on a stem of the glasses for a minute, and pointed and shook that stem at Doppelkreutz.
"You know, we've been thinking about this all wrong. We're ASSUMING there is some bug, some bad program-loop in the PAL AI, that causes it to jump back to its "childhood" under the right conditions. All our thinking for the past ten hours has been to look for why it went wrong."
"And??"
"Well, don't you see? We've combed every major sub-system, tracing its "behavior," we even got close to following its alphanumeric code-patterns, but hardly ANYBODY is qualified to keep up with billions of lines of written code any more. Programmers all depend on modules and visual programming flowcharts and other para-programming like that. But all this work to find out what went wrong, and we didn't consider whether the program really went RIGHT when it reverted."
"You heard me," Druff pressed on, "We keep thinking an Experimental AI like this HAS to grow and progress, never revisiting a more ignorant, primitive state. I'm starting to think it encountered a unique set of conditions, and picked the right course of action for that strange situation. Namely, to go back to wearing diapers."
"I really don't follow this drek."
Druff smiled indulgently at Doppelkreutz, "When you were a boy, did you ever read classic science fiction written in the Abacus Age of computers? Sometimes, not even written ON computers? Isaac Asimov? William Gibson?"
"Well, sure."
"And probably, you remember the idea behind Asimov's so-called Three Laws of Robotics?"
"Yeah. A set of three laws to make decisions about avoiding harm, and ranking what to choose in case of conflicts. But it was all crap, of course; most military robots are designed to do as much harm as possible anyway. That Arasaka thing that looked like a riding lawnmower, for example, used against the anti-corp uprising in Peru in 2014..." Doppelkreutz shuddered.
"But the point is that you're trying to program an Artificial Intelligence to deal with contradictions. In the stories, the robot is to obey humans, but it is not to harm a human, and so it will refuse to obey a human's order to harm a human. A priority of choice is set up in case of contradictions. But in reality we program dozens, hundreds of laws not just about harm but about contradictions in general, to help it make decisions about what to do."
"And so?"
"And so, the PAL AI must have come up against a contradiction, and made a correct decision by its lights, a decision which called into question all the stuff it had previously learned and caused it to revise its knowledge, reverting to a basic state. I think I found what that contradiction was, but you may not like the solution."
Druff went back to the computer screen and mouse again, activating the menus for following sections of a 3-D map of PAL's memory-structure. Since its icon had not been shaped, PAL in cyberspace looked like some kind of gnarled tree-root, branching off in all directions. Druff picked six or seven locations, clicked for windows to zoom in on those and sub-windows to zoom some more. Data and numbers scribed next to each view.
Druff went on, "There are centers of activity in more than one memory location, each representing a KIND of contradiction which keeps feeding back and looping and demanding more of the AI's processing time until it makes a decision -- and then it junks years of self-programming. We couldn't detect it before because none of the systems seem to have anything to do with each other. But their common thread is that they all depend on knowing their spatial orientation in cyberspace; they depend on knowing a common representation of Sosumi's local space."
"Well, of course they do," said Doppelkreutz slowly, still not seeing where this was going.
"But that's the assumption that's killing the AI again and again! It assumes that the local 'space being constructed by Sosumi's own CPU's is self-consistent. Data Walls are smooth and intact, and if they are not it's because a 'runner has been snooping around. The sound of a 'runner crunching a Data Wall is assumed to synchronize with his actions. If it walks around the corridors, it assumes that any unbroken Data Wall it sees will also "feel" smooth if it brushes by."
"You mean, things have NOT been consistent?"
"No! As odd as this may seem, Sosumi's basic spatial representation has been repeatedly virussed!"
"What?? An inside job?"
"Well, that's the funny part. If it had been an inside job, if Sosumi's 'space had been ordered to look or feel like one thing but really be another, there would have been data records inside Sosumi 'space itself. The 'space has to consult its own programming from time to time, so the code for the inconsistencies can't itself be hidden from the CPU's. So it's NOT an inconsistency, just maybe something to fool 'runners, and the AI would have learned that."
Druff was building, "What you've got, Doppelkreutz, is some enterprising freebooter fiddling with spatial representations without Sosumi CPU's knowing about it. Maybe he's using it to cloak himself, or to confuse the CPU's. Normally this would play hell with 'runners as well, since they can't be on-line ALL the time. They also depend on a consistent view of 'space and would be totally lost if the corporation made changes, so their viruses would be moved around and they couldn't find them again to make use of them. Maybe we've got some sophisticated AI-virus thing going, which reports back regularly to the 'runner and automatically gives him a map of the 'space PLUS the fiddling that's been performed on it."
"But why does our AI revert?"
"Well, suppose you got drunk and started to see pink elephants, but when you were sober again they weren't there. What would you conclude?"
"I was on a bender, so I know I had hallucinated. I wasn't in my right mind."
"Exactly. BUT THE EXPERIMENTAL AI CAN'T TELL THE DIFFERENCE. It is a construct, walking around in a 'space that is ALSO a construct. It can't tell the difference between reality and illusion. It has no reason to mistrust any 'space construct because we've never programmed it to mistrust its own spatial data, furnished by Sosumi. It learns and adapts based on what it sees. So any contradiction doesn't cause it to suspect hallucination. It accepts the new sensations as consistent with the perceptual laws it has already learned AND with Sosumi 'space data.
"Over the years, the AI has learned how to perceive the data-flows into it, a huge series of numbers about spatial relationships, colours and shapes and sound and feel. It knows that if it sees a cube in 'space, it can feel the cube and the cube will feel flat if it feels a face, or sharp if it feels a corner or edge. When there is sensual contradiction, when something that looks smooth doesn't FEEL smooth, or some icon that is speaking produces sound that is out-of-synch, we would suspect illusion. We would mistrust the data.
"PAL doesn't do that. Instead, it revises what it knows about spatial perception to try to account for the new sensations. But to account for the new sensations it has to resolve contradictions by deconstructing all the rules of spatial perception it learned. They no longer seem true any more, and so...it destroys larger and larger chunks of memory, more and more established decision-trees, and reverts to an embryo, hoping to start fresh and make sense of the new, inconsistent 'space from First Principles."
Druff stopped and sighed, as if he had unburdened himself of a great load. "So you could restart the AI from backup, but as soon as you exposed it to Sosumi 'space it would sense the environment was inconsistent and then fail, always in new and different ways. The problem wasn't any one bug in PAL, but many, many bugs in the playground you gave to PAL to wander through.
"There's no way to prevent what is going on with PAL, because perceptually it will always be exposed to the same bugs and problems, and it does what it thinks is right to account for them. It's as if you walked through a land of M.C. Escher paintings and optical illusions, and were forced to believe everything was self-consistent, until pretty soon you started to confuse round with square, black with white, up with down. You would seem insane to everyone else, trying to walk on the ceiling and flipping off light switches when entering a room instead of turning the lights on."
Doppelkreutz thought hard, "So you mean there's no way we can use the AI until these perceptual viruses are cleared?"
"It's worse than that. ANY ICE you have will suffer from the same problem. They've been rendered stupid. They are programmed to detect a 'runner approaching, one who can't satisfy a Code Gate for example, but to do that they trust in the representation of local 'space that the CPU's are feeding them. If a 'runner has virussed the 'space, he can walk right through that Code Gate because the Code Gate sees nothing there as far as it knows, so it can't spring into action and lock them out.
"You're looking at a very expensive job by hand to clear out those viruses, using human operators. Even they may not be too successful unless they search for the tiniest flaws in spatial representation, the tiniest contradictions. It's as if they had to wander around, tapping at every spot on every Data Wall to hear a hollow thud instead of a nice satisfying pounding sound. In the meantime, you'd need more human guards roving your 'space, people who don't accept illusions, with more flexibility of judgment than we can program into ICE and AI's."
Doppelkreutz thought a bit. "Would YOU like to lead a team of 'runners to safeguard Sosumi 'space?"
"Hell, no!" shot back Druff. "In THAT minefield? Normally a corp 'runner works by marshalling the ICE, making it move or work smarter, adding to its intelligence. But they are dependent on spatial views from the software, trusting that what the ICE sees and what Sosumi has programmed the 'space to be is what is really there. We don't have that any more.
"And involving ourselves directly on the playing-field means we are subject to its 'space. If the 'runner expects us, he might virus an ordinary doorway to be razor-wire, or a power input jolting us with volts, whatever! We can't trust what we see if we marshal the ICE like battlechess, and we're at risk if we make ourselves into one of the pawns on the chessboard, our ass in the sling, often too late to detect an illusion for what it is.
"No, Doppelkreutz, this is where my contract with you ends. I've found your problem, only I just TOLD you you wouldn't like the answer. You would need a crack team of expert 'runners, moving densely through the 'space 24 hours a day, but who are so dedicated to the company that they don't care about their own skins. Like fanatical corp samurai. But that combination of computer creativity and loyalty doesn't exist; the REAL talents are freelance and can't be coerced. Don't think of me; *I'm* no salaryman!"

Long after Druff had left, Doppelkreutz stared at the computer screen. Druff was wrong; there was ONE creative talent at Sosumi who was totally under the company's thumb...
