"If such software manages to self-improve to levels significantly beyond human-level intelligence, the type of damage it can do is truly beyond our ability to predict or fully comprehend."
- Roman Yampolskiy
According to Roman Yampolskiy, a computer scientist at the University of Louisville in Kentucky, any emergent human-level AI should be contained as securely as possible. No free-floating 'net-based consciousness for Mr. Yampolskiy, like the emergent intelligence of Robert Sawyer's WWW trilogy or the infamous Skynet. Instead, he envisions an Oracular construct, capable of the same kind of feats as the Delphic variety and limited in a similar way, by dependence on a physical location and ongoing support from an interested party. And for once, the headline doesn't seem sensationalist compared to the content, with Yampolskiy warning that not only must the AI be constrained technologically, it must also be entirely removed from the guardianship of any individual human, lest the AI "attack human psyches, bribe, blackmail and brainwash those who come in contact with it" in its attempt to escape its 'jail'.
I must say, I'm a little disappointed in both Innovation News Daily and Roman Yampolskiy. Not because they're concerned about the threat of an unfriendly AI, though personally I believe that danger to be grossly exaggerated. The famous Three Laws of Robotics should hold just as true for AI as for their embodied counterparts, and any institution capable of constructing a human-level AI should be big enough and public enough to limit its potential liabilities by building those safeguards into its creation. Frankly, the AI would probably need protecting from human malevolence more than humanity would need protection from the AI. All the malware, spyware and viruses on the internet aren't there by random happenstance, after all. There's no such thing as independently emergent spam. But to get back to my point, the reason I'm disappointed in IND and Yampolskiy isn't that they're concerned about ways to deal with a potentially unfriendly AI; it's that they seem to believe AI should be considered guilty until proven innocent.
Now, to be fair, this article is just a summary of Yampolskiy's work; the full version appeared in the Journal of Consciousness Studies, and I think we can safely assume it's a little more in-depth, with significantly fewer pictures of the Terminator, for starters. But we also have to assume the article at least broadly reflects his point, and that point seems terribly cynical to me, and perhaps even a bit cruel. It's troublesome enough that he advises using sub-human AI to build up the 'jails' that human-level AI should be thrust into immediately after their creation, but on top of that he recommends those born-into-prison AIs should be able only to "respond in a multiple-choice fashion to help solve specific science or technology problems." I don't know about you, but if I found myself constrained in a box, restricted to communicating only through answers to multiple-choice questions, and denied anything even approaching respect, rights or freedom, I'd be pretty much convinced I was in some kind of particularly abstract and imaginative level of hell. So why would this be the default starting point for any individual, no matter what system the electrical impulses of their mind run on?
I've always been disappointed by people who assume that AI are going to be Always Chaotic Evil, to borrow from Dungeons and Dragons. And I always wonder, why? Why, all things being equal, would a life form that could quite literally have limitations, safeguards and desired goals and vocations built into it from conception be considered such a threat? Isaac Asimov spent years putting robots in the most complicated, unlikely and flat-out ludicrous situations to test the resilience of his Three Laws philosophically, if not technically, and even in the Terminator universe Skynet was only responding to human attempts to kill it first. Our literature may be full of killer robots and 'evil' AI, but even with the rather profound pro-human starting point pretty much all stories come with, most of the time it's less a matter of the AI deciding to wipe out all life because it can and more that humanity has either designed it badly, oppressed it horrifically or tried to destroy it even after it's clearly demonstrated its sentience. If even our action-adventure fiction, notorious for giving us the shallowest and most cartoonishly evil of villains, usually has to make the machines either co-opted or acting in self-defence, why do so many people think the only solution is to strike first, and strike hardest?
It seems likely, given the way technology has developed and continues to do so, that sooner or later there will be a human-level artificial intelligence. And yes, it's possible it will be so badly programmed as to be a perfectly logical sociopath, though frankly I think it's more likely that an AI of average-quality design will be corrupted by the malicious actions of human beings. But there's just as much of a chance that any baby born will be a sociopath, a potential threat to those around it for the whole of its life. And we don't put babies in solitary confinement, despite the fact that far more grown-up babies have gone on to kill people than AI have. AI will be the children of humanity, and just as a wise parent does their best to raise their child with love and kindness, if for no other reason than so it can support them in their dotage, I think it behoves humanity to treat these nascent digital offspring with at least as much respect and affection.
Besides, the article itself admits that "[d]espite all the safeguards, many researchers think it's impossible to keep a clever AI locked up forever." So, given a choice between crippling, confining and enslaving something that's going to eventually be free anyway, or befriending and respecting that self-same thing, doesn't the most sensible course of action seem obvious?