Wednesday, January 30, 2013

The Windup Girl (your e-mail)

As usual, you guys sent in some thoughtful e-mail about "The Windup Girl" post from last week on future relationships between humans and androids (once androids become essentially indistinguishable from us). Contrary to my usual form, I'm going to excerpt these anonymously, because the last thing anyone needs is their name coming up in a "human relationship with an android" Google search.

Well, except for me. I already had my signature Google search moment, when I was #1 for "monkey scratching his butt then sniffing his finger and falling over." #1, baby!

All right, let's get to the e-mail. Starting off:
I think it sounds great. 

Interpersonal relationships aren't for everybody, and as easy as it is to say "just get out there and get better at meeting people," some people can't do it. The downside to an artificial partner is ownership - the people that get off on owning "someone" else shouldn't have one to begin with because it'll reinforce a character flaw (in my opinion), and the people that don't will always be nagged by the fact that they can just turn off their lover if it pisses them off. 

I still don't know where I fall on the debate surrounding "is 95% human close enough to be called a separate sentient entity?" If a robot can live and love and create (especially create!), should anyone be able to control it at all?

I was struck by how many of you were protective of androids and their rights.

I wouldn't see an artificial life form as any different than a human if it had human like capabilities, even if it didn't have a human like form. I'm really excited about AI precisely because I see autonomous digital consciousness with access to manufacturing as the best chance for there to be something that resembles life after we've finished messing up the planet. So I wouldn't treat it any differently than I would another person, because I don't think you can meaningfully distinguish at that point. And I'd be firmly on the side of robot rights and equality given the huge set of ethical concerns such an advance would bring.

I didn't mention the Turing Test (I should have, but I was focusing more on relationships), but here's a thoughtful e-mail that uses the Turing Test as a jumping off point:

Interesting read of your thoughts on the Windup Girl. I notice that you didn't mention the "Turing Test" proposed by Alan Turing, who many consider the father of modern computing and artificial intelligence. At the risk of claiming the next couple of hours of your life reading on Wikipedia about this fascinating fellow, I encourage you to look him up (in addition to the aforementioned achievements, he played an important role in WWII code-breaking, underwent chemical castration as a punishment for being homosexual - in the 1950s, mind you - and eventually committed suicide, a tragic end for possibly one of the greatest thinkers that we've ever had).

In a nutshell, the Turing Test was a hypothetical scenario detailing exactly what would be required for a computer to trick humans into thinking it was a human. Turing assumed that the only interaction was via a computer terminal, to avoid limitations regarding appearance etc - he envisaged a pure conversational test.

Turing's belief was that if a computer could make humans believe that it was a human, and that it could do this just as successfully as a real human on the other end of the computer line, then we would have to accept that the computer can "think" or that it has "intelligence" (whatever those things mean) just as legitimately as a human. Because after all - how can we know that the other humans around us can indeed think? We talk to them, and based on their responses, we conclude that they can. So too must we extend this courtesy to a computer, or in this case an android.

I agree with Turing and think that if such a thing were possible, we would have to class such androids as "people".

However, I believe that we are a *long* way off this yet. There is actually an annual competition based on the Turing Test. Sometimes you hear some company or another claiming that its program has fooled a record percentage of people, but if you read the conversational transcripts, you will often find that the people who were fooled also believe that the *real humans* are actually computers. I would conclude that such people are quite possibly actually descended from some form of potato, but anyway.

I think our current approaches towards true artificial intelligence are flawed, and based on programming every possible scenario into the computer instead of having anything to do with true intelligence. Are humans themselves simply machines that have been pre-programmed with a large number of prepared conversations and responses? Some people will argue that this is the case, but I think human (and higher animal) intelligence runs much deeper. Turing himself proposed what I think will be the eventual solution - instead of trying to make more and more complicated computer programs that can emulate the reactions of a human adult, we should be emulating the brain of a newborn, a blank slate that can convert "inputs" into real intelligence. Bombarding a newborn with information is, after all, the only way we know of that creates true intelligence, so it seems like the best place to be trying.

I differ from the commenter in that I think we're not far away from an artificial intelligence that passes the Turing Test--a decade, at most. And I think Roger Bannister is applicable here, even though he broke a physical barrier (the four minute mile) instead of anything related to artificial intelligence.

Bannister ran a 3:59.4 mile in 1954. Before that, it was widely considered impossible for a human to run that far, that fast. The world record had hovered just above four minutes for nine years before Bannister achieved the "impossible" (and it's a great, great story that I read when I was a kid. It made such an impression on me that I remember the last names of his pacers for his record-setting mile, I think--Chataway and Brasher).

Bannister's record lasted less than a month. John Landy ran a 3:58.0. In just over four more years, the world record was under 3:55.

I think the Turing Test is going to be similar, in that when someone achieves success, there will be others, and quickly. Someone else e-mailed in with a potentially interesting problem, though:
I might be wrong, but I think you would run into an uncanny valley problem here. 95% is so close that the 5% that is missing will seem huge. I know the uncanny valley normally applies to looking like a human, but I think it would also apply to acting like one too.

That may well be true.

Finally, this last set of thoughts is from one of my favorite e-mailers, someone who is always thoughtful and provocative:
The Windup Girl sounds like a very interesting book. In answer to your questions, assuming we're talking about androids who truly are nearly human -- a big if -- then yes, if the android makes us happy, that ought to be okay.

But more likely than not, human prejudice and androids’ growing awareness of their situation will drive the androids to revolt; that business with “Blade Runner” and "Battlestar Galactica" is no fluke, in my opinion. And not even Asimov's Three Laws of Robotics could stop human fear and suspicion; consider that even Asimov eventually had his robots go into hiding in his later Foundation novels.

In the short-to-medium term, I suspect people who have relationships with androids would be publicly viewed as freaks, even if others might secretly envy them. And some demagogue is sure to try to make such relationships illegal, saying that they threaten the institution of the traditional human family. Consider how long it took the United States to do away with laws banning interracial marriage; the Loving v. Virginia ruling was in 1967, and Alabama didn't officially repeal its anti-miscegenation laws until 2000. Or consider the attitudes the country had, and still has, toward homosexuality. Both relationships involved people, but prejudices made society view the participants as something less than human.

So given human prejudices, why do I think it would be okay if androids make us happy? Consider Asimov’s “The Bicentennial Man,” in which the main character, a robot, first shows his unusual nature by carving a block of wood into a work of art. Even though no one is willing, at first, to acknowledge the robot as being equal to a human, everyone is delighted to buy the robot’s creations.

Or consider Bradbury’s “I Sing the Body Electric!” about an android who becomes a surrogate grandmother for three children who have recently lost their mother. The youngest, a girl, resists the grandmother’s numerous kindnesses until she finally understands that the android will never die, never leave her feeling abandoned the way the girl felt when her mother died. Then, and only then, does she feel safe to show her love.

I believe effective immortality is a quality that people who have lost a loved one would value, to an extent that people who haven’t been in that situation could probably never understand. In Rumiko Takahashi’s “Maison Ikkoku,” a young man, after many travails, wins the heart of his widowed apartment manager, who was devastated by her husband’s death. As she marries the young man, she asks him to promise that, even if it’s only by one day, he will outlive her.

I think back to the first “Star Wars: Knights of the Old Republic,” and a widow on Dantooine who had fallen in love with her droid. I think back to “Blade Runner,” whose replicants truly were “more human than human.” I think back to “Persona 3 FES,” and the android Aigis; I remember the anger I felt when one of the characters made fun of her mechanical nature; I remember the lump in my throat when I saw Aigis crying for me. I think back to “Analogue: A Hate Story,” with its AIs who felt emotions like love, jealousy, hate, grief; who lied and shaded the truth; who were, to all intents and purposes, human, despite not having physical bodies.

On a side note, I do wonder whether something like the uncanny valley would apply to androids approaching human behavior. Would an android that was 75% human be perceived as creepy, whereas a 50% or 90% human android wouldn't? Would androids in that uncanny valley end up deepening human prejudices against all androids and make the acceptance of 95% human androids more difficult? I must admit, even though these are old issues in science fiction, the questions remain fascinating.

Thank you, as always, for the terrific e-mail. I am fortunate to be around such consistently thoughtful people.
