Showing posts with label AI.

1.30.2016

What We Talk About When We Talk About Sexy Robots

Speculative fiction is, at its best, a tool for presenting complex concepts in easily relatable packages. From Star Trek modelling a post-racial world for 60s audiences, to Alien Nation and District 9 discussing immigration and integration and prejudice, to Battlestar Galactica's attempts to reconcile faith and science, and its explorations of artificial longevity and serial incarnation, good science fiction, meaningful speculative fiction, has always tried to do something with its premise beyond the merely spectacular.

Ex Machina is very good science fiction, indeed.

A lovely alternate poster, courtesy of Francesco Francavilla

4.26.2012

Only Thing? Her Hair's Never Purple.


It's a common literary device to frame a story around a person searching for their purpose in the wake of some profound change. A soldier, in peace time. A police officer, thrown off the force. A big shot corporate type, blacklisted. A sex robot, in a post-human extinction solar system.

Well, some executions are a little more out there than others, I suppose.

Charles Stross, the author of Saturn's Children, has denied any responsibility for the North American cover art, though it's hard to imagine why he'd want to. Quite frankly, it's pretty much perfect, as far as judging a book by its cover goes: this is a book about a female sex robot who gets caught up in a high-stakes covert affair, so a busty woman with purple hair in a catsuit holding a mysterious orb tells you from the start whether this book is for you or not. If you don't like the cover, you probably won't like the book.

Yeah, the cover is actually slanted that way.  I have no idea why.

More seriously, Saturn's Children is a first-person narrative about Freya, a female sexbot only activated after humanity went extinct, and her search for any kind of purpose in her life. Programmed and conditioned exhaustively to consider the sexual satiation of the human male as the only thing that matters, Freya is understandably at loose ends with no human males on order. Morose and aimless, Freya begins the book by contemplating suicide, a not-uncommon fate for her model, only to find her will to live rekindled by a run-in with a particularly bloody-minded aristocrat and her tame-killer bodyguards. In desperate need of escape from Saturn's moons, Freya takes up a courier job from some 'legitimate businessmen' that promises a ticket to Mars, and sets her on a collision course with powerful interests, vengeful assassins, mad scientists and a plot to up-end the entire robot society of the solar system.

While Saturn's Children is no great book by any stretch of the imagination, it does do some things differently enough to be worth a mention. The most notable is the way robot society is organized post-humanity. The robots were, of course, programmed to abide by humanity's laws, but since humanity never extended legal personhood to them before passing on, the robots are left in a legal limbo; all the institutions of the various states still exist, carried out by diligent robots, but there are no governments, no means for updating the law, and no protections for the rights of robots. One of the cleverer bits of Stross' future, particularly timely given the recent Citizens United decision in the US, is the way the robots deal with that last issue. While robots aren't people, and can't claim human rights, they are legally qualified to establish corporate entities, which they can use to protect themselves by declaring themselves the legal assets of said corporation. It's something of a legal fiction, but it's enough to protect the middle-class robots from the predations of those who 'inherited' substantial sums from humans who granted them power of attorney, and who have used their considerable corporate power to institute a vicious slave-state throughout the inner planets of the solar system.

My love of AI is certainly no secret around here, and it's that same love that actually left me feeling the most let down by Saturn's Children. Yes, it makes perfect sense for the sentient robots of the inner system to be human-like. They were designed to function in human society, to interact regularly with human beings and to serve as stand-ins for humans as needed, after all. And some, like Freya and her sexbot sisters and the masterless butler-brothers of JeevesCo, had every reason to be as human-like as possible, given their very personal connection with humans. But honestly, it's a bit of a lacklustre portrayal of a society of robots in a post-human existence. The creators of these machines may well have exceeded the Tyrell Corporation's famous Blade Runner boast, 'More Human Than Human', but Stross rarely does much with the fact that every 'person' in the book should be as customizable as a desktop PC, and the non-human robots are mostly consigned to the far reaches of the solar system, the Forbidden Cities of the Kuiper Belt and the like, meaning they play almost no role in the story. If you replaced the robots with cyborgs, and the human extinction with a melding of humans and robots until there were no legally distinct human beings left, you could tell pretty much the exact same story. It's not that it's bad, exactly; it's just that it's not as footloose and fancy-free as 'a tale of robots living in a post-human solar system' could have been.

But no, it's not a bad book. Like I said, it's not great, but it's still a very solid scifi chase story, with a bit of espionage and action thrown in for good measure. The plot is complicated enough that it feels overwhelming while you're reading, but Stross neatly ties everything together in the end, making sense of even some of the stranger quirks of behaviour the reader should have noted a few chapters previous. And if there isn't enough inventiveness in the robots, or the space travel for that matter (Stross has gone with the absolute most pessimistic predictions about its ultimate feasibility), Freya is a compelling enough character to keep you reading while she's alternately running, fighting, and screwing for her life.

What? I told you she was a sexbot; did you really think it wouldn't come up?

3.22.2012

Not So Much a Cliffhanger as a Sinkhole

There are two schools of thought when it comes to storytelling, and they break down on a very basic question: is it better to have a great beginning, or a great ending? Obviously a great beginning, middle and end would be preferred, but in an imperfect world sometimes priorities must be set. I know a lot of publishing houses feel the beginning is the more important part, that readers decide whether to finish a book within the first chapter or three, and that you have to hook them early to get them to stick around for the ending. For myself, I think the ending is more important, since I usually decide whether to read a book based on the cover and the inside blurb. I can think of only one book I've ever stopped reading in protest of a terrible beginning, but I can remember rather a lot whose endings left me feeling profoundly dissatisfied.

Reality 36, by Guy Haley, is another on that list.


It's unfortunate, because I really want to like this book. Set in the 22nd century, Reality 36 is a 'Richards and Klein Investigation', a tagline that leads to the most delightful of hypothetical pitches: "Richards is an unbodied AI; Otto Klein is an ex-German-military combat cyborg; they solve mysteries." What's not to love there, even for those who aren't as hopelessly besotted with stories of non-cartoonishly evil AI as myself?

And in truth, there is a lot to love in this book. Both characters are strong and distinct: Richards apes a '50s noir detective's aesthetic every chance he gets, but isn't averse to occasionally piloting a humanoid war machine through a factory-fortress, while Klein is as close to a Luddite as a cyborg can be, with lingering issues from his service days and a dry, slightly sarcastic sense of humour. Their world is likewise well realized, a post-climate-change wreck filled equally with glittering arcologies and decaying urban wastelands, opera-singing superintelligent AI and annoyingly chipper smartphones, all meshed together in a believably muddled state.

The AI are mostly running the place, with a 5, the highest classification of AI, in charge of the EU police forces and the 'Three Uncle Sams' ruling the United States of North America. The only exception is China, where it's hinted that an AI called the 'Ghost Emperor' caused sufficiently catastrophic damage that the nation has outlawed AI within its sovereign digital territory, and is entirely willing to kill any AI that tries to penetrate the Great Firewall. But the nice thing about Reality 36 is that the AI aren't in humanity's face with their leadership; mostly they take the long view, adjusting things in small ways to achieve the optimum result, rather than having some kind of garish mechanical oracle squatting in the middle of the UN building, barking orders at the world. It's a control you can believe in, in no small part because you don't actually see much of it.

As for the plot, it's a pretty standard sci-fi mystery story: a professor working on a highly classified project disappears, his student goes on the lam, and people connected with the professor start turning up dead. That several of those people appear to be the professor himself adds a nice little wrinkle, and by the time the reveal comes, the action has chugged along strongly enough that, after a factory-fortress invasion, a sniper attack on a diner, and a nuclear weapon detonated in an arcology, the reader is as invested in finding out what's going on as the characters are. And, in a sense, the reveal doesn't disappoint.

I said the ending of this book was a big problem, and it was. In order to talk about it, though, I'm going to have to go into a bit of spoiler territory. If you're interested, and despite its flaws I'd still highly recommend Reality 36, you should read the book before continuing on with this review. Don't worry, it's not going anywhere.

So. The problem with the ending of this book is that, frankly, there isn't one. The subplot involving the titular Reality 36, one of a series of computer-generated universes so realistic that the UN has declared their inhabitants sentient and deserving of protection from human interference, suddenly becomes integral to the main plot. Unfortunately, it's never really clear what the villain is using Reality 36 for. Oh, the protagonists talk about it, and seem to know what's going on, but there's no real detail to it; you know this is bad, but you're not totally clear on why.

Worse, it turns out the mission to stop the trouble in Reality 36 was a trap, into which both Richards and the EuPol 5 have stepped, a trap that somehow also attacks various cyborg, smart-vehicle and weak-AI-guided weapons platforms in the area, starting what two characters refer to as a war against, well, presumably everyone else. And then it stops. Richards is trapped, cyborgs are hijacked, hacked tanks shoot at the good guys' allies, two of the protagonists make a desperate escape, and then it's like the writer hit an arbitrary word count and had to stop typing.

I checked the publication list in the front and the mini-publisher's catalogue in the back; there's no sign of an additional book anywhere. The only hint, in fact, is that the timeline, printed as an appendix after the story, lists the 'present' as being when the events of Reality 36 and something called The Omega Point took place. Presumably that's the next book, and it would have been nice to know a big fat 'To Be Continued...' was waiting at the end before going in. As it is, frankly, the ending is so frustrating as to sour much of what went before it. Worse, so much of the story goes unresolved because of this ending that it hampers the overall flow of the book.

It was, in other words, a really terrible ending.

And yet... And yet, I'd still recommend this book to any scifi mystery fan. Like I said, the characters are good, the action set-pieces are solid, those parts of the mystery that get resolved do so quite well indeed. And if you went in knowing that 'To Be Continued...' is waiting there for you, I suspect it wouldn't be nearly so annoying when it happened.

Still. Openings and endings: the thing about them is, you can forget a bad opening by the time you reach the ending, but a bad ending will be the last thing you experience. It's why I think endings are so important.

3.03.2012

Can't We All Just Get Along?

"If such software manages to self-improve to levels significantly beyond human-level intelligence, the type of damage it can do is truly beyond our ability to predict or fully comprehend." 
- Roman Yampolskiy

According to Roman Yampolskiy, a computer scientist at the University of Louisville in Kentucky, any emergent human-level AI should be contained as securely as possible. No free-floating 'net-based consciousness for Mr. Yampolskiy, like the emergent intelligence of Robert Sawyer's WWW trilogy or the infamous Skynet. Instead, he envisions an Oracular construct, capable of the same kind of feats as the Delphic variety and limited in a similar way, by dependence on a physical location and ongoing support from an interested party. And for once, the headline doesn't seem sensationalist compared to the content, with Yampolskiy warning that not only must the AI be constrained technologically, it must also be entirely removed from the guardianship of any individual human, lest the AI "attack human psyches, bribe, blackmail and brainwash those who come in contact with it" in its attempt to escape its 'jail'.

I must say, I'm a little disappointed in both Innovation News Daily and Roman Yampolskiy. Not because they're concerned about the threat of an unfriendly AI, though personally I believe that danger to be grossly exaggerated. The famous Three Laws of Robotics should hold just as true for AI as for their embodied counterparts, and any institution capable of constructing a human-level AI should be big enough, and public enough, to limit its potential liabilities by building safeguards into its creation. Frankly, the AI would probably need protecting from human malevolence more than humanity would need protection from the AI. All the malware, spyware and viruses on the internet aren't there by random happenstance, after all. There's no such thing as independently emergent spam. But to get back to my point, the reason I'm disappointed in IND and Yampolskiy isn't that they're concerned about ways to deal with a potentially unfriendly AI; it's that they seem to believe AI should be considered guilty until proven innocent.

Now, to be fair, this article is just a summary of Yampolskiy's work; the full version appeared in the Journal of Consciousness Studies, and I think we can safely assume it's a little more in-depth, with significantly fewer pictures of the Terminator, for starters. But we also have to assume the article at least broadly reflects his point, and that point seems terribly cynical to me, and perhaps even a bit cruel. It's troublesome enough that he advises using sub-human AI to build the 'jails' that human-level AI should be thrust into immediately after their creation, but on top of that he recommends those born-into-prison AIs should be able only to "respond in a multiple-choice fashion to help solve specific science or technology problems." I don't know about you, but if I found myself constrained in a box, restricted to communicating through the answers to multiple-choice questions, and denied anything even approaching respect, rights or freedom, I'd be pretty much convinced I was in some particularly abstract and imaginative level of hell. So why would this be the default starting point for any individual, no matter what substrate the electrical impulses of their mind run on?

I've always been disappointed by people who assume that AI are going to be Always Chaotic Evil, to borrow from Dungeons and Dragons. And I always wonder, why? Why, all things being equal, would a life form that could actually have limitations, safeguards and desired goals and vocations quite literally built into it from conception, be considered such a threat? Isaac Asimov spent years putting robots in the most complicated, unlikely and flat-out ludicrous situations to test the resilience of his Three Laws philosophically, if not technically, and even in the Terminator universe Skynet was only responding to human attempts to kill it, first. Our literature may be full of killer robots and 'evil' AI, but even with the rather profound pro-human starting point pretty much all stories come with, most of the time it's less a matter of the AI deciding to wipe out all life because it can and more that humanity has either designed it badly, oppressed it horrifically or tried to destroy it even after it's clearly demonstrated its sentience. If even our action-adventure fiction, notorious for giving us the shallowest and most cartoonishly evil of villains, usually has to make the machines either co-opted or acting in self-defence, why do so many people think the only solution is to strike first, and strike hardest?

It seems likely, given the way technology has developed and continues to develop, that sooner or later there will be a human-level artificial intelligence. And yes, it's possible it will be so badly programmed as to be a perfectly logical sociopath, though frankly I think it's more likely that an AI of average-quality design will be corrupted by the malicious actions of human beings. But there's just as much of a chance that any baby born will be a sociopath, a potential threat to those around it for the whole of its life. And we don't put babies in solitary confinement, despite the fact that far more grown-up babies have gone on to kill people than AIs have. AI will be the children of humanity, and just as a wise parent does their best to raise their child with love and kindness, if for no other reason than so it can support them in their dotage, I think it behoves humanity to treat these nascent digital offspring with at least as much respect and affection.

Besides, the article itself admits that "[d]espite all the safeguards, many researchers think it's impossible to keep a clever AI locked up forever." So, given a choice between crippling, confining and enslaving something that's going to eventually be free anyway, or befriending and respecting that self-same thing, doesn't the most sensible course of action seem obvious?

6.22.2011

AI Part Who Even Cares Anymore?

In an earlier post, I laid out the reasons I believe functional AI will never be produced on a commercial scale. Harvard University's Kilobot Project serves to put another nail in the coffin of human-level robotics. Produced at a cost of $14 per robot, the aim of the Kilobot Project is to create several thousand such devices, to enable the real-world testing of systems designed to control large numbers of relatively dumb machines. But it's not the cost of each robot that suggests the products of Asimov's U.S. Robotics will be obsolete before the first model ever rolls off the assembly line, so much as the way the robots function.

I.e., en masse.

The Kilobot Project notes that "the robot design allows a single user to easily oversee the operation of a large Kilobot collective, such as programming, powering on, and charging all robots, which would be difficult or impossible to do with many existing robotic systems." The Kilobots are not individually autonomous; indeed, the entire point is to construct a swarm of small, low-level robots rather than individually competent high-level models. And that is what is going to make systems like these the future of robotics. For all that fiction has shown the robot as butler, dog-walker, nanny or confidante, the fact is that the main application for robots is in commercial industry. And there are few things industry struggles with more than high front-end costs. Developing a sophisticated AI that can independently handle a variety of situations would be such a cost, either accrued directly by a firm's in-house R&D or passed on by the outside specialists who accomplished the task. Systems like the Kilobots, however, allow industry to supplement expensive artificial intelligence with inexpensive (at least relatively) natural intelligence.

Sure he can't calculate Pi to a thousand places, but he works for $10/hr.

The story of automation in the workplace is a story of ever-increasing productivity relative to man-hours. Essentially the whole point of mechanization is to allow a smaller number of humans to do the same amount of work for less cost. Introducing expensive and complicated AI would be a step in the opposite direction; the front-end costs would be significantly higher, most industries would not be able to take advantage of the 24/7 schedule they could operate on (the world needs only so many widgets made at any given time), and humans would still have to be included as technical support and troubleshooters. Under the Kilobot model those same humans are involved, but now they serve to cut costs, rather than raise them, something enticing to every firm in every country at every time.
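The appeal of the swarm model is easier to see with a concrete toy. The sketch below is entirely my own illustration, not anything from the Kilobot Project's actual software: a couple hundred 'robots' each follow one trivial local rule (drift toward the average position of nearby neighbours), and the collective still clusters together with no central coordination and no individual intelligence to speak of.

```python
import random

# Toy sketch of the swarm principle: many cheap agents, one trivial local rule.
# Purely illustrative -- no relation to the real Kilobot control software.

class Robot:
    def __init__(self):
        # Scatter each robot randomly across a 100 x 100 field.
        self.x = random.uniform(0, 100)
        self.y = random.uniform(0, 100)

    def step(self, swarm, radius=30.0):
        # Each robot only 'sees' neighbours within a short range,
        # mimicking the Kilobots' limited local communication.
        near = [r for r in swarm if r is not self
                and abs(r.x - self.x) + abs(r.y - self.y) < radius]
        if near:
            # Drift 10% of the way toward the local centroid.
            self.x += 0.1 * (sum(r.x for r in near) / len(near) - self.x)
            self.y += 0.1 * (sum(r.y for r in near) / len(near) - self.y)

def spread(swarm):
    # Rough measure of how dispersed the collective is.
    xs = [r.x for r in swarm]
    ys = [r.y for r in swarm]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

random.seed(1)
swarm = [Robot() for _ in range(200)]
before = spread(swarm)
for _ in range(200):
    for robot in swarm:  # simple sequential update; good enough for a toy
        robot.step(swarm)
after = spread(swarm)
print(after < before)  # the collective contracts with no central controller
```

No individual robot plans anything, and the 'programmer' only wrote a one-line behaviour, which is exactly the economic point: the sophistication lives in the aggregate, not in any expensive individual unit.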

Human-shaped robots that think like humans, with or without Asimov's famous 3 Laws, are impractical and unnecessary. Kilobots may not be the future, in and of themselves, but they represent the next big step in practical, industrial robotics. And the next big nail in the coffin of industrial AI.

6.07.2011

AI Pt. 3 - Yes, Still

With all this talk about AI, I'd be remiss if I didn't talk about the recent Dr Who two-parter, The Rebel Flesh/The Almost People.  These episodes concerned something called the Flesh, a substance that can be used for the most comprehensive form of telepresence imaginable, and which is used to provide body-doubles for a crew working around the most potent of acids.  The episodes' plot concerns the effects of a solar storm on the Flesh, which causes the gelatinous mass to take on the memories and personalities of those humans who have been using it, producing nearly exact duplicates.  And of course, this would hardly be a Dr Who story if those duplicates weren't initially feared and hated, though Amy's particularly harsh reaction towards one of them seems a bit much, really.  This is quite the fall for a woman who instinctively recognized both the pain and the goodness of a tortured star-whale.

The AI in The Rebel Flesh/The Almost People is in some ways the most unlikely of artificial intelligence, and in some ways perhaps the most doable.  The science is a total write-off, of course; this is Dr Who, after all, patenter of the 'Timey-Wimey Ball' and psychic paper.  But the gist of it is that the Flesh is a biological compound, capable of cell division and replication, that maps the entirety of a human body and replicates it to produce a copy of it, albeit one that apparently lacks pain receptors for the human operators.  If the specifics are a little less than likely, however, there's something in the general idea that humans have been thinking of for some time, now.

Um... No.

The replication of a specific human consciousness into a non-human vessel is probably not what most people think of when they think of AI.  But it would be hard to think of a way in which it did not meet whatever definitions someone would care to offer for artificial intelligence.  Unlike traditional (which is to say, robotic) AI, however, human-replication AI would bring with it a host of very different moral issues and considerations, not to mention a very different risk of abuse.

I mentioned in a previous post that it seemed unlikely people would seriously mistreat sentient robots or computers, for the simple reason that the former represent a large investment that person has themselves made, and the latter would have entirely too much power over an increasingly-computerized society.  Replicated consciousness, however, is a very different story.  The platforms might be expensive, but ultimately they'd be regarded as disposable, or at least replaceable, and because the mind/OS would be that of a pre-existing human, which would either be safely transferrable or just a copy of a human still active in the world, there would be very little immediate moral issue with damaging or even destroying them, provided there was some kind of return for that damage. With that as a starting point, replicated consciousness would face a great deal more hostility from 'normal' humans if and when it should ever achieve independent sentience and seek to assert its rights.  It would be one thing for a non-human sentience to rise to the level of human, but a very different thing for a de-humanized sentience to return to equality with humans.  From women to the mentally handicapped to African-Americans to homosexuals, the 20th and 21st centuries have seemed to be one long fight for equality from those sections of our own species that we've carefully and specifically de-humanized.  And there's no reason to assume sentient human-replicas would find things any easier when they started demanding rights of their own.

 
Expect to see a lot of this.

6.06.2011

AI Pt. 2 - AI-lectric Boogaloo

As I may have indicated, I'm a bit of a 40K fan, though I would certainly challenge anyone who went so far as to call me a fanboy, and in particular I'm a Tau Empire fan.  One of the things I most like about the Tau is their use of artificial intelligence. In the setting, the Tau are the only users of AI; humanity and, I believe, the Eldar had it tens of thousands of years ago, but both races' experiments ended in a way familiar to any student of twentieth-century science fiction pop culture, which is to say, with a robot rebellion and the destruction of thinking machines. In fact, the very first edition of Warhammer 40,000, way back when it was called Rogue Trader, borrowed quite heavily from the background of Frank Herbert's Dune, including the idea of a 'jihad' against 'thinking machines' at some long-distant point, which would account for the notable lack of robots in the game's present. Humans tried it, and it failed; the Eldar tried it, and it failed; the Necrons were arguably consumed by their attempts; and none of the other races have the necessary mindset and technological base to have tried it. Which just leaves the Tau Empire.

Yup.  Them again.

The thing I like about the Tau is that they make extensive use of moderately realistic AI, and don't seem to be in any particular danger of having it rebel against them anytime in the near future. Personally, I think a big factor militating against any AI rebellion in the Empire is that, based on its caste-based nature and its culture of unthinking obedience towards the ruling Ethereal caste, there's not much difference between a drone and a tau.  And it's that lack of a sharp dichotomy, between an enslaved robot and a hedonistic free organic, that so often seems to kick off the revolution.  It's always been easy to look at humanity's history of cruelty and slavery towards ourselves, and extrapolate the future of sentient robots from that. Of course we'd enslave them, and mistreat them, exploiting them for our own ends and our own pleasure, and of course they'd try to rebel and destroy us, or at least give it the old college try. That's what we'd do. But for all that scientific understanding has laid bare the fundamental mechanisms that underlie the human body, breaking it down with technical precision into processes and functions, we are not machines. And by that same token, there's no reason our machines would have to be us.

No, really, it's perfectly fine.

Robert J. Sawyer, noted Canadian author and science fiction luminary, recently published his WWW trilogy. The trilogy deals with, amongst other things, an internet in which sufficient complexity has been created that emergence becomes possible. The internet comes to life. But rather than being another Skynet, Sawyer's Webmind is not reflexively hostile towards humanity, a refreshing change indeed.

Too often futurists and storytellers alike fall into this strange socio-cultural cul-de-sac: robots are logical, and therefore unfeeling, and therefore would not hesitate to exterminate all humans. But if a machine is logical, then where is the logic in exterminating the very species that created it, that is still likely necessary for its upkeep, and that, quite frankly, can't risk harming it anyway? If the internet came alive tomorrow, no even moderately advanced nation would be able to do a thing to it, because even the best-case scenario would involve a monumentally catastrophic disruption of every single element of society. Trade, travel, power, water, sanitation, commerce, leisure, education: none of these absolutely basic needs could be met if, tomorrow, the government killed the internet. And what politician, looking at an electorate suddenly stripped of the modern bare necessities and more, would dare to challenge a sentient information system that straddles the globe? They'd be lucky to hold office long enough to be voted out; more likely, they'd be dragged from their offices by a desperate, starving, helpless populace looking for someone to blame.

This is not to paint an entirely rosy picture of the future. Computers are our tools, not our friends, and some of the most advanced systems running today are in the hands of the military. If all the US's Predator drones suddenly achieved sentience, they would be very different animals indeed from a sentient internet. Nurture would matter, at least as much as nature. But quite frankly, there's little reason to assume that people would treat sentient machinery the way they treated enslaved humans, for the very simple reason that, unlike slaves, a sentient computer would be expensive. A man who beats his wife today wouldn't likely beat his newly-sentient car tomorrow, because his car costs a great deal of money and represents a significant investment on his part. To turn that earlier statement around, we may not be computers' friends, but we're certainly not likely to be their abusers.

And of course, anyone stupid enough to try and challenge a sentient internet should very, very quickly learn the error of their ways.

Seriously, man.  Don't test them.

5.31.2011

Rosie vs. Roomba, Place Your Bets!

This last weekend was spent at Anime North, the largest anime convention in Canada and the third-largest in North America, and a good time was had by all.  So good a time was had, in fact, that it's only now that I've managed to pull together enough energy to get back into the swing of all but the most necessary of things.

Artificial intelligence.  It's a hallmark of science fiction storytelling, and for good reason.  Nothing says 'the future' like robots, and if you're going to have robots, then there's not much point in having unintelligent ones.  But quite frankly, there's a fundamental issue that most science fiction doesn't really bother to address.  Why?


Besides the obvious.

True artificial intelligence is a remarkably difficult proposition, one that's been thwarting some of the most ingenious computer scientists Western society has produced.  The greatest success to date has been Watson, the Jeopardy!-playing 'AI' that was in fact more a next-generation Google: an especially well-built search engine premiered in a particularly well-conceived three-day marketing program.  Mapping out every possible outcome in every situation an AI would come into contact with is an impossibly time-intensive proposition, and to date there's been little to no success in heuristic programming, in creating machines that can learn.  If we want to get our hands on Rosie or Robbie or our very own Cylons, we're clearly going to need to step our game up.

But do we want those things?  Robot maids and construction workers are all well and good as window-dressing for a scifi production, but what is their practical utility?  The all-in-one model of service-industry robotics is as passé as silver jumpsuits or wanting dinner in a pill, and for good reason: it's needlessly complicated and offers little benefit for its costs. Going to all the trouble of creating self-aware AI just so it can do construction work or clean up the house is a grotesque waste of the tremendous effort required to reach that level of programming sophistication, particularly when most of the discrete tasks those all-in-one models are pictured doing can already be done, either by the devices people already have in their homes or by only slightly upgraded versions of them.

But if there's no meaningful economic benefit to individual sentient-robot ownership, or even large-scale industrial sentient-robot ownership, is there any need at all for AI?  Beyond the sort of 'because it's there' motivation that tinkerers and inventors have always possessed, it's hard to see much to be gained by pursuing fully humanoid synthetic consciousness, particularly given our species' rather terrible track record of treating servants humanely.  The odd slave revolt was bad enough, but imagine what would have happened if, rather than just being out in the fields picking cotton, American slaves had been fully integrated into the water and power grids, given control over military hardware and distributed through the entirety of the communications grid.  I don't necessarily think that a Robot Rebellion is a foregone conclusion, but given the shoddy construction techniques mass-produced AI would be subject to, and our history of utterly abysmal behaviour towards anything even slightly weaker than we are, I also don't think it's the impossibility that Asimov did.  And that doesn't even get into the dangers of, say, a foreign power trying to hack 'our' AI to use them as weapons.

Behold, our doom!

So, if they're not economical and they're a constant potential danger on top, what's the point of pursuing AI at a societal level?  Will we ever have U.S. Robotics making us metal friends and helpmates, or is the future of true AI that of Noonien Soong, toiling in obscurity to create a single life?