Tuesday, April 26, 2016

When a Robot Kills, Is It Murder or Product Liability? Ryan Calo and Paolo Bacigalupi in the spotlight

This short story was commissioned and edited jointly by Future Tense—a collaboration among Arizona State University, New America, and Slate—and ASU’s Center for Science and the Imagination. It is the first in Future Tense Fiction, a series of short stories from Future Tense and CSI about how technology and science will change our lives.                          
They launched a new series; it leads off with “Mika Model.”
 


When a Robot Kills, Is It Murder or Product Liability?

An expert on robotics law responds to Paolo Bacigalupi’s short story “Mika Model.”


Ryan Calo responds to Paolo Bacigalupi’s short story “Mika Model.”
I am a law professor who writes about robotics. I’m also a big Paolo Bacigalupi fan, particularly his breakout novel The Windup Girl involving an artificial girl. So for me, “Mika Model” was not entirely new territory. For all my familiarity with its themes, however, Bacigalupi’s story revealed an important connection in robotics law that had never before occurred to me.
“Mika Model” feels plausible. The robotics are very advanced, admittedly; any engineer can tell you how difficult building a real Mika Model would be. Mika has leapt across the uncanny valley, lacking even the puppetlike movements of Bacigalupi’s earlier character Emiko. She would kill a Turing test.
But the trends that underpin the Mika Model are well underway. Japanese roboticist Hiroshi Ishiguro painstakingly imitates the human form. Research in human-computer interaction suggests social robots are capable even today of playing on our emotions and instincts. And interconnected robots already share knowledge with the cloud.
The legal backdrop is also familiar. The Mika Model is manufactured and leased by a company called Executive Pleasures under a complex terms-of-service agreement. But Mika appears to have a mind of “her” own. So Bacigalupi’s protagonist, Detective Rivera, finds himself asking a canonical question in robot law: When a robot kills, is it murder or product liability?
A criminal analysis would ask whether a robot could form the requisite mens rea, i.e., intend the killing. Mika suggests we are all just soft machines; robots and people start with innate protocols and supplement with experience. She also claims to experience pain and other emotions. But is this commonality enough to support intent? And exactly what forms or theories of punishment apply?
The product liability question is also complex. Perhaps this particular Mika Model is defective, in which case strict liability is appropriate. Or perhaps the victim misused the product by torturing it. And how could the company foresee that a Mika Model would act in this way? There have been deaths, but they have always been linked to user “stamina issues.” Maybe Executive Pleasures avoids liability this time—but loses if it ever happens again.
What’s driving everything, of course, is just how hard Mika is to characterize. She’s a machine. But she has a social valence, a common robot quality with which the law already struggles. Courts have had to decide, for instance, whether the all-robot band at Chuck E. Cheese should be taxed as a “performance” or whether a toy robot represents something “animate” for purposes of import tariffs. The legal scholarship continues to debate whether, as the corporate executive Holly Simms suggests in the story, “It’s better if you don’t anthropomorphize.”
In the end, the reader cannot help but anthropomorphize. We titter at Mika’s sexual chemistry with Rivera, bristle at the revelation of her suffering, and recoil at Simms’s fatal screwdriver. Nor can we help but de-anthropomorphize, to appreciate how programmable each of us can be.
It was simply thrilling to witness a master like Bacigalupi bring these and other familiar themes to life. I intend to assign “Mika Model” to my students. But the story did more than entertain. Until reading it, I had never quite appreciated something important: There is a fundamental similarity between the question of whether a robot can be responsible and the question of whether a robot should enjoy rights.
What makes the final scene of Bacigalupi’s story so shocking—at least to me—is the fundamental lack of process. More than anything I wanted the action to slow down, to pause for reflection. I wanted to hear what this court-appointed lawyer would have said in Mika’s defense, both against the criminal allegations she might face and against the claims of corporate ownership.
Fueling this intuition was not merely that Mika imitated life but that she claimed responsibility. Rights entail obligations. If I have a right, then someone else has a responsibility to respect that right. I in turn have a responsibility to respect the rights of others. Responsibility in this sense is a very human notion. We wouldn’t say of a driverless car that it possesses a responsibility to keep its passengers safe, only that it is designed to do so. But somehow, we feel comfortable saying that a driverless car is responsible for an accident.
To talk of a machine as truly responsible for wrongdoing, however, instead of merely the cause of the harm, is to already side with Mika. For if a machine is a candidate for responsibility in this thick way, then it is also a candidate for the reciprocal responsibility that underpins a right. The question of whether Mika intended to kill her owner and the question of whether she is entitled to a lawyer are, in many ways, indistinguishable. I see that now; I had not before.
A definition I like says good art is at once surprising and inevitable. “Mika Model” is good art in this way. I knew the tropes of the story and the questions of law and society they tend to drive. I knew the author’s previous work. But for all of the inevitability surrounding a story of robot crime, Paolo Bacigalupi once again manages deep surprises.
This article is part of the artificial intelligence installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down. Read more from Futurography on artificial intelligence.
Future Tense is a collaboration among Arizona State University, New America, and Slate. To get the latest from Futurography in your inbox, sign up for the weekly Future Tense newsletter.
