Cold Embrace: The Question of Motive and Character Gallery 04

The most important thing to remember when drafting the conflict in a story is that “evil for evil’s sake” is very, very rare, if not outright nonexistent. The motives of countries, of ideologies, and certainly of individuals may be corrupt – may be driven by greed, by selfishness, by cowardice. But they are all aimed at doing what appears to be the right thing at the time. While evil exists, the initial motive of any individual or group is rarely to do wrong. While not everybody is a utilitarian in conduct, most are Kantian in intent.

…my apologies. Been taking an ethics class lately, so please excuse the mild memetic pollution.

Anyhow, the point is that a cookie-cutter robber-baron villain isn’t going to cut it if the intent is to write a believable story. True psychopaths are few and far between, after all, and even politicians, as twisted and corrupt as they need to be to perform their jobs, stop far short of actively seeking to screw over their nation for personal gain. Admittedly, that’s mostly because it doesn’t get them reelected, but it still serves to demonstrate that lust for power usually stems from a desire to repair a faulty or broken system.

Of course, in any story, there has to be conflict. An easy resolution where all parties are in unanimous agreement doesn’t necessarily make a bad story, but it does make it either very short, very boring, or worse – both. It’s also, again, highly unrealistic to expect any controversy of any scale to be so easily resolved: if there is one absolute about the human condition, it is that we are extraordinarily argumentative. Our diversity of viewpoints lends itself quite naturally to clashes, even between the most well-meaning individuals – and once you enter the realm of abstract priorities and ideologies, who is right or wrong, good or evil, in any specific situation becomes downright murky, if not completely meaningless.

Cold Embrace doesn’t have villains. It has competing goals and desires. Even the killers have, I hope, sympathetic motives, and not every death was a matter of injustice. Rather, if there is any real evil in my attempt to portray a believable future, it is merely the evil lurking in the blind spots of our own awareness – evil born of ignorance, not willful hate.

Sometimes, it is not evil men that commit atrocities, but innocents.

ARTIFICIAL INTELLIGENCE

Transhumanist writers often postulate that the only reason why we have sentience and self-awareness, and computers don’t, is that we’ve got close to a hundred billion primitive processors working in asynchronous parallel, while our most powerful computers don’t have even a tenth of that quite yet. Neurochemical packet exchange is, after all, inherently less efficient than silicon and electrons, but when you pack that many processors into a small space, with a few hundred million years to work out the kinks in the system, they still pack quite a wallop. While no human being in the world will ever outperform a computer at pure math, our abilities in spatial reasoning, temporal reasoning and object recognition still outperform our best hardware.

Transhumanists also point out that, at the rate we are developing computer science, this won’t hold true for much longer.

Artificial intelligence will, to my best guess, ultimately be the end result of a convergence of various technologies. The neural mapping projects, the recent development of the memristor, and the incredible, accelerating pace of Moore’s Law – none of these alone will push us over the edge, but together with other related technologies, they may very well give rise to the first artificially created intelligence within our own lifetimes.
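(A quick, purely illustrative aside: the “accelerating pace” above is just compound doubling. The little sketch below assumes a present-day count of roughly two billion transistors and the classic two-year doubling period, both round numbers picked for the example rather than anything from the story.)

# Back-of-the-envelope Moore's Law projection: steady doubling of
# transistor counts. Starting count and doubling period are assumptions
# chosen only to illustrate the compounding, not canonical figures.
def transistors(years_from_now, start=2e9, doubling_period=2.0):
    """Project a transistor count forward under constant doubling."""
    return start * 2 ** (years_from_now / doubling_period)

for years in (0, 10, 20, 30):
    print(f"+{years:2d} years: ~{transistors(years):.2e} transistors")

Thirty years of uninterrupted doubling multiplies the count by about thirty thousand, which is the whole point: the curve looks gentle right up until it isn’t.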

Mind you, it won’t be all that intelligent at first. If we even get a childlike analogue, we can consider ourselves immensely lucky – impossibly so, perhaps. But the thing – the dangerous thing – about artificial intelligence is that it has, within its own means, the capacity for tangible self-improvement. Not through the trials and tribulations we human beings have to put ourselves through in order to eke out the barest advantage over our previous selves… but easily. Like slapping on an additional hard drive, optimizing its internal code, running in-depth self-diagnostics. Like being able to design an inherently superior descendant model, wholly without human input. When you give human-level creativity to something capable of thinking at the speed of lightning, human intervention becomes not only redundant, but a liability.

This is obviously not a comforting thought by any means whatsoever. The obsolescence of the entire human species is hard to see in a positive light. At the very best, we can keep up for a decade or two by making digital analogues of our own brains – up until somebody realizes that the only real way to keep up after that is to optimize the “Human” software past the point of recognition.

The obvious question at this point is: if this is such a likely future, should we even bother to further AI research at all? The threat it poses to the human condition is completely unprecedented – and the question of how we should even legally treat artificial intelligences, though a staple of science fiction novels, is even more so. Especially as more and more people realize that these brilliant mind-children of ours have nothing in common with us at all – not in the way they think, and not in the way they experience the world. They are not Human, under any definition of the term.

Worse, what if it doesn’t take a multimillion-dollar lab and a massive team of experts to code an artificial intelligence? What if, once we reach Moravec’s threshold, intelligence turns out to be an emergent property of fast-computing systems? We use weak A.I.s every day, every time we touch a keyboard, and our desire for faster and better hasn’t slackened at all in the last few decades. Convergent techs don’t necessarily need to include neural research and all the fancy, cutting-edge gizmos and doodads we’re developing now – what if all it took was a human interface program, some fancy logic algorithms, and some means of interacting with the world at large?

What legal rights does your laptop have to its pursuit of life, liberty and happiness?

James Channing

Age: 44

Sex: Male

Occupation: Lieutenant – Naval Intelligence (Tokyo Post)

Appearance: A lanky man at 6’1”, seemingly emaciated. Close-cropped, strawberry-blonde hair, and a perpetually stubbled jaw despite his best efforts. Light brown eyes, slightly far-sighted.

Personality: Stoic, relentlessly thorough, humorless

History: The Channing family has had a long history with the US military. Patriotism and service are almost part of their genetic makeup, and James is no exception. He works with nearly machine-like efficiency, wasting no hour of the day in the execution of his tasks. Unfortunately, that’s also why he’s been stuck as a lieutenant for the last eight years – though something of a savant at his profession, his brusque mannerisms and social disdain have left him with few, if any, friends in the service. His expertise in ELINT (electronic intelligence) makes him indispensable, especially given the increase in Asiatic political tensions, but the utter inability to understand either the man or his motives leaves his coworkers feeling a touch queasy in his presence.

~ by Gonzo Mehum on March 7, 2009.
