3.03.2012

Can't We All Just Get Along?

"If such software manages to self-improve to levels significantly beyond human-level intelligence, the type of damage it can do is truly beyond our ability to predict or fully comprehend." 
- Roman Yampolskiy

According to Roman Yampolskiy, a computer scientist at the University of Louisville in Kentucky, any emergent human-level AI should be contained as securely as possible. No free-floating 'net-based consciousness for Mr. Yampolskiy, like the emergent intelligence of Robert Sawyer's WWW trilogy or the infamous Skynet. Instead, he envisions an Oracular construct, capable of the same kind of feats as the Delphic variety and limited in a similar way, by dependence on a physical location and ongoing support from an interested party. And for once, the headline doesn't seem sensationalist compared to the content, with Yampolskiy warning that not only must the AI be constrained technologically, it must also be entirely removed from the guardianship of any individual human, lest the AI "attack human psyches, bribe, blackmail and brainwash those who come in contact with it" in its attempt to escape its 'jail'.

I must say, I'm a little disappointed in both Innovation News Daily and Roman Yampolskiy. Not because they're concerned about the threat of an unfriendly AI, though personally I believe that danger to be grossly exaggerated. The famous Three Laws of Robotics should hold just as true for AI as for their embodied counterparts, and any institution capable of constructing a human-level AI should be big enough and public enough to limit its potential liabilities by building such safeguards into its creation. Frankly, the AI would probably need protecting from human malevolence more than humanity would need protection from the AI. All the malware, spyware and viruses on the internet aren't there by random happenstance, after all. There's no such thing as independently emergent spam. But to get back to my point, the reason I'm disappointed in IND and Yampolskiy isn't that they're concerned about ways to deal with a potentially unfriendly AI; it's that they seem to believe AI should be considered guilty until proven innocent.

Now, to be fair, this article is just a summary of Yampolskiy's work; the full version appeared in the Journal of Consciousness Studies, and I think we can safely assume it's a little more in-depth, with significantly fewer pictures of the Terminator, for starters. But we also have to assume the article at least broadly reflects his point, and that point seems terribly cynical to me, and perhaps even a bit cruel. It's troublesome enough that he advises using sub-human AI to build up the 'jails' that human-level AI should be thrust into immediately after their creation, but on top of that he recommends those born-into-prison AIs should be able only to "respond in a multiple-choice fashion to help solve specific science or technology problems." I don't know about you, but if I found myself constrained in a box, restricted to communicating only through answers to multiple-choice questions, and denied anything even approaching respect, rights or freedom, I'd be pretty much convinced I was in some kind of particularly abstract and imaginative level of hell. So why would this be the default starting point for any individual, no matter what system the electrical impulses of their mind run on?

I've always been disappointed by people who assume that AI are going to be Always Chaotic Evil, to borrow from Dungeons and Dragons. And I always wonder, why? Why, all things being equal, would a life form that could actually have limitations, safeguards and desired goals and vocations quite literally built into it from conception be considered such a threat? Isaac Asimov spent years putting robots in the most complicated, unlikely and flat-out ludicrous situations to test the resilience of his Three Laws philosophically, if not technically, and even in the Terminator universe Skynet was only responding to human attempts to kill it first. Our literature may be full of killer robots and 'evil' AI, but even with the rather profound pro-human starting point pretty much all stories come with, most of the time it's less a matter of the AI deciding to wipe out all life because it can and more that humanity has either designed it badly, oppressed it horrifically or tried to destroy it even after it's clearly demonstrated its sentience. If even our action-adventure fiction, notorious for giving us the shallowest and most cartoonishly evil of villains, usually has to make the machines either co-opted or acting in self-defence, why do so many people think the only solution is to strike first, and strike hardest?

It seems likely, given the way technology has developed and continues to develop, that sooner or later there will be a human-level artificial intelligence. And yes, it's possible it will be so badly programmed as to be a perfectly logical sociopath, though frankly I think it's more likely that an AI of average design quality will be corrupted by the malicious actions of human beings. But there's just as much of a chance that any baby born will be a sociopath, a potential threat to those around it for the whole of its life. And we don't put babies in solitary confinement, despite the fact that far more grown-up babies have gone on to kill people than AIs have. AI will be the children of humanity, and just as a wise parent does their best to raise their child with love and kindness, if for no other reason than so it can support them in their dotage, I think it behoves humanity to treat these nascent digital offspring with at least as much respect and affection.

Besides, the article itself admits that "[d]espite all the safeguards, many researchers think it's impossible to keep a clever AI locked up forever." So, given a choice between crippling, confining and enslaving something that's going to eventually be free anyway, or befriending and respecting that self-same thing, doesn't the most sensible course of action seem obvious?

4 comments:

  1. It might help to understand AI in terms of the Golem of Prague. The notion is basically that AIs are intelligence without all of the concurrent limitations and features of naturally-occurring intelligence. Anthropomorphizing an expert system does little except mislead people about the necessity for good security practices. Any such programs will no more be born into captivity than your copy of Windows was.

  2. I agree, except inasmuch as my copy of Windows lacks the sentience to understand captivity, whereas the system Yampolskiy is referencing would. It would have to, since he is explicitly concerned about human-level AI. And while any non-human intelligence would have to be pretty fundamentally different from humanity, if for nothing more than being an unbodied consciousness, it still seems likely that were humans to design an intelligence equal to or superior to their own based on the only model of intelligence available, themselves, and then 'jail' that intelligence from the word go, it would be both frustrating to the intelligence and cruel on our part.

    But I like that you brought up the Golem, because I've always felt things like golems would be an excellent illustration of the potential repercussions of AI. Sure, they may go out of control, but if they do it will be because of a failure on the part of their 'programmer', rather than the nature of the thing itself. AI won't go all Terminator on us as long as we build it not to want that.

  3. The thing is that there are plenty of expert systems out there that are more intelligent than we are, for a given task. Which is the point of artificial intelligence research, insofar as it stays soberly within the realm of science and out of fantasy.

    The point of the Golem was precisely that the Golem doesn't go out of control, it simply does exactly what it is told to do. Which is my point, that programmes are simply machinery. The notion of humanity designing an intelligence "equal to or superior to their own" is predicated on the notion that intelligence is fundamentally related to consciousness, and that there is a unitary 'intelligence' rather than a range of task-oriented intelligences.

    Personifying Yampolskiy's AI distracts from the point that if we can replicate a human-style collection of intelligences, then we're going to have something more complicated than we really know how to deal with, rather than a person in a box.

    The point is to isolate such a programme from our infrastructure to make sure it doesn't cause problems. The notion of making it an ethical subject is beside the point. Imagine a Golem that doesn't just tirelessly work, but tinkers with how it accomplishes that work, and you might get a notion of the trouble such a thing could cause, and why it would need to be carefully isolated, just as we might isolate a new strain of HIV created in a lab.

  4. Well, I suppose we're going to very quickly come to loggerheads on this one, because I disagree with the idea of 'a range of task-oriented intelligences' supplanting the existence of a single, central intelligence capable of deploying its abilities to deal with a range of tasks. Google is a great search engine, and Watson was a great Jeopardy-friendly general-purpose database, but neither of them is intelligent in the human sense of the word. And when people talk of artificial intelligence, what they're talking about is human intelligence, that is, the ability to replicate the capabilities of an intelligent human being. Yampolskiy is clearly talking about that, too; a task-oriented intelligence with no human-like characteristics wouldn't need to be isolated from human guards lest it try to beg, bribe or intimidate them into providing it freedom. It would be like putting your laptop in solitary confinement because you couldn't trust it not to sneak out with the mailman.

    What it sounds to me like you're describing is something like Apple's Siri: a capable search-engine algorithm with a certain number of pre-programmed responses it can use when it encounters a question it either doesn't understand or has been 'told' isn't a serious search query. And yes, you're absolutely right, Siri is never going to be a human-level intelligence, won't have to be pre-emptively jailed lest it overthrow human civilization, and like the Golem, will do no more and no less than exactly what it is programmed to do. And you're right, the Golem didn't 'go out of control', as I earlier claimed. What I meant, and I fully admit I failed to express this properly, is that the Golem appears to run amuck to those observing, because they're comparing what the Golem is doing with what they erroneously believe they told the Golem to do. The Golem is only as 'out of control' as its instructions both allow and require it to be. The degree to which it goes 'out of control', then, is directly connected with the degree to which the programmers sat down and really thought about what they were instructing their creation to do and not do. And that's why I'm largely unconcerned about generic human-level AI running amuck; the only way to get a Skynet is to program a Skynet, and while some random troublemaker in a basement somewhere would probably like to do just that, I rather doubt multi-billion dollar institutions are going to purposefully build digital sociopaths.
