Artificial intelligence. It's a hallmark of science fiction storytelling, and for good reason. Nothing says 'the future' like robots, and if you're going to have robots, then there's not much point in having unintelligent ones. But quite frankly, there's a fundamental issue that most science fiction doesn't really bother to address. Why?
Besides the obvious.
True artificial intelligence is a remarkably difficult proposition, one that has thwarted some of the most ingenious computer scientists Western society has produced. The greatest success to date has been Watson, the Jeopardy!-playing 'AI' that was in fact more of a next-generation Google: an especially well-built search engine that premiered in a particularly well-conceived three-day marketing program. Mapping out every possible outcome of every situation an AI might come into contact with is an impossibly time-intensive proposition, and to date there's been little to no success in heuristic programming, in creating machines that can learn. If we want to get our hands on Rosie or Robbie or our very own Cylons, we're clearly going to need to step our game up.
But do we want those things? Robot maids and construction workers are all well and good as window-dressing for a sci-fi production, but what is their practical utility? The all-in-one model of service-industry robotics is as passé as silver jumpsuits or wanting dinner in a pill, and for good reason: it's needlessly complicated and offers little benefit for its costs. Going to all the trouble of creating self-aware AI just so it can do construction work or clean up the house is a grotesque waste of the tremendous effort required to reach that level of programming sophistication, particularly when most of the discrete tasks those all-in-one models are pictured doing can already be done, either by the devices people already have in their homes or by only slightly upgraded versions of them.
But if there's no meaningful economic benefit to individual sentient-robot ownership, or even to large-scale industrial sentient-robot ownership, is there any need for AI at all? Beyond the sort of 'because it's there' motivation that tinkerers and inventors have always possessed, it's hard to see much to be gained by pursuing fully humanoid synthetic consciousness, particularly given our species' rather terrible track record of treating servants humanely. The odd slave revolt was bad enough, but imagine what would have happened if, rather than just being out in the fields picking cotton, American slaves had been fully integrated into the water and power grids, given control over military hardware, and distributed throughout the communications network. I don't necessarily think that a Robot Rebellion is a foregone conclusion, but given the shoddy construction techniques mass-produced AI would be subject to, and our history of utterly abysmal behaviour towards anything even slightly weaker than we are, I also don't think it's the impossibility that Asimov did. And that doesn't even get into the dangers of, say, a foreign power trying to hack 'our' AI to use them as weapons.
Behold, our doom!
So, if they're not economical, and a constant potential danger on top of that, what's the point of pursuing AI at a societal level? Will we ever have U.S. Robotics making us metal friends and helpmates, or is the future of true AI that of Noonien Soong, toiling in obscurity to create a single life?