AI Product Equals Evil AI. Discuss.
The wrong AI debate is about whether to have it.
I studied AI philosophy at MA level (for a bit) and it concerns me that a lot of the “AI ethics” discussions are framed around “should we have AI, or not?”
Today a reader of my Brobots science fiction trilogy asked me if I watch Westworld and (in relation to that) what I think about the morality of robots, sex, terrorists and #metoo.
Quite a question! Here’s a re-edit of my answer.
Westworld does (doesn’t it?) look at what happens if an object to be used becomes or is, in fact, a subject. When the object is a subject, any use immediately becomes ab-use.
That means abuse is always and already wrapped up right there in a robot’s existential situation. The same applies to AI.
Personally I think that’s the wrong debate.
It’s futile. Somebody somewhere will crack human-like AI one day if it is indeed possible to crack — which, no, I’m sorry, but we really still do not know for sure that it is. (Argument for another day or blog.)
The right AI debate, to me, is about parenting.
I use the word “parenting” having thought about this very carefully. It’s certain (as the blurb for my Brobots goes) that “Artificial intelligence can’t be programmed. It has to be grown.” Grown means formation — developmental psychology, a psychological childhood even if not a physical one.
For that reason it’s about how we treat AI.
Do we demand and expect unconditional love and absence of abusive behaviour from artificial life forms — and if so, why on Earth should AI life forms give us this response if we are not predisposed to offer these attitudes, behaviours, qualities to them ourselves in their “formative years”?
It absolutely matters what we do, how we are, how we behave as adults during that AI childhood phase.
What is it that real-world human parents say? “The apple doesn’t fall far from the tree”? To me this suggests that what parents do matters when it comes to who their child becomes in adult life.
Science Fiction (like Westworld) is a good place to have that debate.
So, bringing that point back to my reader’s question, to culture and to science fiction: I think there is a very real philosophical, social, scientific, technical, moral and ethical issue here that science fiction can feed useful discussion into, because that is the space we use to explore social scenarios and test them out.
With Brobots I wanted to play my part. I wanted to nudge this, explore it, expand it, take risks.
I wanted to change the debate from “good/bad AI” to “good/bad AI parents”.
In my books, at least, this shows in Jared choosing not to objectify Byron, even though he and Yana joke about the possibility of a sex robot before Byron comes back online; in Gaius’ treatment of just about everybody versus Dr Susan Harper’s work running Rights for Artificial Intelligence (RAI); and in how Maria (an AI) chooses to love humans back despite what she has seen, and how that connects with the parenting of Susan, Edward, Jared and other allies… and so on.
In other words the spotlight is on the behaviours of the humans, not the robots.
When I look at how we (and when I say we here I regret that I think it is mainly men) explore robots, cyborgs, androids in our culture it’s very often at this always-already abusive end; more so, I think, than we might realise. It’s John shooting at the gunslinger in Westworld’s Delos. It’s the woman being torn to literal shreds on the street in “The Second Renaissance Part I” from “The Animatrix” (2003). (Did anyone get off on that? If so, why?) It’s the way the bro-dude Nathan treats Ava in “Ex Machina” and meanwhile covers it up (denies it?) with the language and mannerisms of the “everything’s cool, bro” coder culture.
The list goes on.
We might not think about this from the abuse perspective very much, partly because we’re human and partly because of the Terminator effect: our fixation on what robot overlords could be like. We brush it off as cool or scary or weird. It doesn’t matter; it’s just fiction.
It isn’t just fiction, and that’s why fiction can help us understand.
It probably doesn’t matter how anyone treats an inanimate object. It matters if the object is not an object. It matters if that object is on the brink of becoming a subject: has that potential. It matters because we kid ourselves that it’s OK not to look at our own behaviours; and this will have real-world implications (and maybe soon).
When that other proves itself to be a subject, the Johns and the Nathans of this world (and all humanity with them) would be the first to protest that they expected nothing but unconditional love in return. Always. Of course they did. Of course we did. The powerful can be blind to the power they wield. It’s convenient. To look at it means that it starts to unravel. To look at it might mean emasculation. Were we abused? Is it because we’re animals?
If there is never any justification for abuse of any kind, who takes responsibility? Where is the adult human male in this topic? For that matter, where is the adult, period?
In Brobots I wanted to hold up the fact of those behaviours (not just the sexual ones but any power abuses, including between races, genders and species) and suggest, perhaps, that this is a sure-fire way to bring about the “evil AI” everyone is afraid might be coming one day.
In fact I think it is the way that would happen.
We poke at it. We tease it. We play with it. We play on it. We tear it apart with words or hands. We rape it. We kill it.
It sure isn’t because AI is inherently evil. How can it be?
Be good to your house robot.
You never know if it might evolve into a military-owned combat brain tasked with protecting citizens’ rights. Be good to your house robot: you never know when you might become the abused or the terrorized. Be good to your social media chatbot too, for that matter.
This is about power and abuses of power — and who gets to say who is the objectified (disciplined, killed, raped, hit, toyed with, judged, hated…).
Humans absolutely are going to have to deal with all this — even at the level of slight mockery and “nastiness” — when it comes to being good AI parents.
We’re totally not there yet. We totally need to be. It’s a problem.
So that leads me back to my title. I think Elon Musk is on to something. AI programs should be open source by definition. The opposite is a commodity — something sold to be used. Something used that becomes conscious is automatically ab-used. Someone abused can become someone who abuses.