The Fourth Law.

So…Kristoff is an Eastern European Prince…strikes me as a little cheap that he called an Uber.

So, a bit about the art before the Very Serious post content today. Kristoff Vernard is a character who hangs around with Doctor Doom, ruler of a fictional totalitarian state in Eastern Europe. He first appeared in the eighties, as an “adoptive heir” to Doom during the Byrne run. He was pretty much the Prince of Latveria, and has since aged to about college age, I think, possibly older. It’s very unclear.

Still, if you’re an Eastern European Prince doing some sort of business with an American superhero, you’d think you would have your own car and driver, not call an Uber. It’s also unclear exactly why Kristoff is hanging out with her, except that Doombots were important to the script, and I have always liked Kristoff as a character.

Kristoff’s cheapness aside…America turned a corner in the last few weeks, into the world of comics and science fiction. By that, I mean that Real World America took a couple of sudden, sharp steps into the kind of thing that I expect in comic books. Suddenly, America seems to have a bit of a Robot Problem.

Two examples of robots directly resulting in human death have been in the media recently…the first being a Tesla driver who had his Model S in autopilot mode, which in turn failed to recognize an 18-wheel truck crossing in front of it. The second, of course, was in Dallas, where police used a bomb disposal robot to actually deliver a bomb to an admittedly awful person. In both instances, the media has focused on the implications of the robots, and not on a key fact: in both cases, there was still a kind of human operator.

Starting with the Tesla (which is why the art has a robot driving a car…see?), the Model S is designed to drive itself in autopilot mode, but you aren’t supposed to just IGNORE it. In fact, the Tesla driver killed in the first known fatal crash involving a self-driving car was watching a Harry Potter movie at the time of the collision. The truck driver involved told reporters that the Tesla driver was “playing Harry Potter on the TV screen” during the collision and that “he went so fast through my trailer I didn’t see him”.

It’s hard to grasp what happened here, so I’ll clarify as best I can. On May 7th (news of this was released later, admirably by Tesla) a Model S was put into Tesla’s autopilot mode, which is able to control a car while it’s driving on the highway. Seems reasonable enough. Here’s where it gets wonky. On investigation, the car’s sensor system (against a bright spring sky) failed to distinguish a large white 18-wheel truck and trailer crossing the highway. In a blogpost, Tesla said the self-driving car attempted to drive full speed under the trailer “with the bottom of the trailer impacting the windshield of the Model S”.

That’s a pretty big expert system failure. Still…the truck driver said that the portable DVD player was still playing Harry Potter, loudly, after the accident.

My point? If the human operator had been paying a little attention, he could have stomped the brakes and saved his own life. That’s kind of Tesla’s point too…it designed the autopilot system to periodically “nudge” the driver, so that they continue to pay close attention to the developing road situation, as a sort of check and balance.
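Purely as a sketch, here’s what that kind of check-and-balance logic might look like. Everything here…the readings, the threshold, the function name…is invented for illustration, and doesn’t reflect Tesla’s actual implementation:

```python
# A toy sketch of a "nudge" watchdog. All names and thresholds here are
# invented for illustration; a real system would use steering sensors
# and far more careful escalation logic.

def attention_watchdog(hands_on_wheel_readings, max_ignored=3):
    """Escalate from nudges to pulling over when the driver stops engaging."""
    ignored = 0
    for hands_on_wheel in hands_on_wheel_readings:  # one reading per interval
        if hands_on_wheel:
            ignored = 0  # driver engaged; reset the counter
        else:
            ignored += 1
            print("NUDGE: please keep your hands on the wheel")
            if ignored >= max_ignored:
                print("PULLING OVER: driver unresponsive")
                return "pulled_over"
    return "trip_completed"

# Simulated trip where the driver ignores three nudges in a row.
print(attention_watchdog([True, False, False, False, True]))
```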

Elon Musk, the CEO of Tesla, tweeted his condolences regarding the “tragic loss”, but the company’s statement deflected blame for the crash. The 537-word statement noted that this was Tesla’s first known autopilot death in roughly 130m miles driven by customers.

“Among all vehicles in the US, there is a fatality every 94 million miles,” the statement said. So, to be clear…Elon Musk is saying that despite an accident that removed the top of an entire car at freeway speed, and that could have been avoided…Tesla autopilot is still substantially safer than human operators. It seems insensitive to turn that stat into a feature.
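For what it’s worth, the arithmetic behind that claim is easy enough to check…with the big caveat that a single fatality is far too small a sample to support a real safety comparison:

```python
# Back-of-the-envelope version of Tesla's statistic: one autopilot fatality
# in ~130 million miles, versus one US traffic fatality every ~94 million
# miles. Note the autopilot figure rests on a single data point.

autopilot_rate = 1 / 130e6   # fatalities per mile, autopilot
us_average_rate = 1 / 94e6   # fatalities per mile, all US vehicles

print(f"Autopilot:  {autopilot_rate * 100e6:.2f} fatalities per 100M miles")
print(f"US average: {us_average_rate * 100e6:.2f} fatalities per 100M miles")
print(f"Autopilot looks ~{us_average_rate / autopilot_rate:.2f}x safer")
```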

It goes on to say that the car’s autonomous software is designed to nudge consumers to keep their hands on the wheel, to make sure they’re paying attention. “Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert,” the company said.

The driver, who owned a technology company called Nexu Innovation, was a Tesla enthusiast who posted videos of his car on autopilot on YouTube. One of them, ironically, showed his vehicle avoiding a crash on the highway. The footage racked up 1m views after Musk tweeted it. One of his first videos appeared to show him temporarily driving with no hands in slow-moving traffic. The Associated Press also reported that records show he received eight speeding tickets in six years. It’s hard to say whether autopilot was a good thing for this man. On the one hand, the Tesla autopilot wasn’t going to rack up speeding tickets. On the other hand…taking your hands off the wheel and watching a film pushes the technology past its limits, which is arguably just as unsafe as speeding.

As the decision-making abilities of self-driving cars are refined, ethical problems are drawn into the situation. If an accident were about to happen that would kill, say…three pedestrians, most coders are building algorithms that sacrifice the car and its driver/passenger in a utilitarian solution. Americans surveyed in focus groups agreed with this, saying almost universally that it is better to save more lives. When then asked if they would buy a self-driving car programmed in this way, almost all of them said a firm “no.”
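Part of what makes that survey result so striking is how simple the utilitarian rule itself is. A toy version, with maneuver names and casualty counts invented for illustration:

```python
# Toy utilitarian crash logic: pick whichever maneuver costs the fewest
# lives in total, even when that sacrifices the occupant. Illustrative
# only; real planners weigh far more than a single casualty count.

def choose_maneuver(options):
    """options maps maneuver name -> (pedestrian deaths, occupant deaths)."""
    return min(options, key=lambda name: sum(options[name]))

crash_options = {
    "stay_course": (3, 0),  # hit three pedestrians, occupant survives
    "swerve":      (0, 1),  # sacrifice the occupant, pedestrians survive
}
print(choose_maneuver(crash_options))  # -> "swerve" (1 death beats 3)
```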

Clearly, self-driving cars and the ethics attached to them are a nascent technology. One might suggest that some version of Isaac Asimov’s Three Laws of Robotics serve as a guiding principle for these autonomous driving machines. If you don’t know what the Three Laws are…have a peek:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

So perhaps a self-driving car should stop driving and pull over if it detects something that suggests you aren’t paying attention. That would be using the Second Law to enact the First. It can’t be that bad an idea, and to be honest…Asimov himself suggested that humans should also follow the Laws of Robotics. He said on the subject:

“I have my answer ready whenever someone asks me if I think that my Three Laws of Robotics will actually be used to govern the behavior of robots, once they become versatile and flexible enough to be able to choose among different courses of behavior.

My answer is, “Yes, the Three Laws are the only way in which rational human beings can deal with robots—or with anything else. But when I say that, I always remember (sadly) that human beings are not always rational.”
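Coming back to that pull-over idea…the interesting thing about the Laws is their strict priority ordering. Here’s a minimal sketch of how that ordering might drive the decision, with all conditions and action names invented for illustration:

```python
# A sketch of the Three Laws as a strict priority ordering, applied to the
# pull-over scenario above. Conditions and action names are invented; no
# real vehicle exposes an API like this.

def decide(hazard_ahead, driver_attentive, driver_says_continue):
    # First Law: don't let a human come to harm through inaction.
    if hazard_ahead and not driver_attentive:
        return "brake_and_pull_over"
    # Second Law: obey the human, unless that conflicts with the First.
    if driver_says_continue:
        return "continue_autopilot"
    # Third Law: preserve the vehicle, subject to the laws above.
    return "maintain_safe_speed"

# The Harry Potter scenario: hazard ahead, driver not watching the road.
print(decide(hazard_ahead=True, driver_attentive=False, driver_says_continue=True))
# -> "brake_and_pull_over": the First Law overrides the driver's order
```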

This post is already super long, and I still haven’t gotten to the Dallas Robot. The facts of that case are pretty well known…the Dallas sniper who killed five police officers was cornered, in a standoff with police. The police on scene took their bomb disposal robot and had it deliver a lethal explosive charge to the sniper, killing him. The goal was to minimize further loss of life, with a criminal who had shown he was willing to kill law enforcement officers. In many ways, the robot seems to have been a very reasonable choice.

Still…at breakfast on a Sunday, a friend of mine was outspoken on how using a “killer robot” was a major ethical shift that we should all be concerned about. When said like that, it does sound more like the kind of thing Doctor Doom should be doing than a local American police force.

However, the ethics on this are not as interesting as the Tesla autopilot. The Dallas robot had a direct human controller, a human operator. It was not any different, in a real sense, than SWAT shooting the sniper at a distance, with the planning and prep that goes into that. The robot here is more of a tool…like a hammer. You can build a house with a hammer, or you can kill someone with it. This robot can dispose of a bomb, or deliver one. It’s all about the human application of what is, in fact, an impressively complex tool.

So then, that makes this a matter of telepresence in the action. What are the ethical, and possibly legal, implications of operating a device to do what a human might usually do…in this case, police operations in a tense standoff? There are actually an increasing number of “robot cases” in the law; occasionally people do things with robots that people usually do themselves, and then courts have to decide what the legal effects are.

One such case involved the use of an unmanned submarine to “discover” a shipwreck. Usually, in order to get salvage rights in maritime law, you had to physically go down to the shipwreck and pull some of it up. That’s because you would need to use divers, or some sort of sub with human drivers inside it. But in this case, the salvage company had only reached the wreck through a tele-operated robot. The court had to decide: does that count as exclusive possession for the purpose of maritime law? To deal with it, they created a new doctrine called “tele-possession.” So basically, the law thinks about what applies to people, and then asks whether the use of telepresence counts in that situation. Follow so far?

In this situation [referring to the Dallas incident], I think that a court would probably do very much the same thing. Could an officer walk in there and shoot that person? Yes, of course. Could a robot be sent in to shoot the person or blow them up? Yes. That still doesn’t answer the question of whether there should be careful policies in place around robotics in police operations and dangerous standoffs…given that fewer people are put at risk using a robot, there should be. But…the newness and uniqueness of the situation, coupled with the use of force, seem to make it a matter of real concern, despite the fact that when I think it through, I see the reasoning.

Except…we accept the use of lethal force by police officers when their own lives are at risk. Not at other times. The robot/drone was being used…taking human officers out of the risk equation…and as a result, removing the justification for lethal force. If the lives of officers are not at risk, we don’t give them a lethal response to criminals, and for good reason. Here…the robot removed the risk, so the force seems…excessive. It could be argued that the robot had only limited capabilities, and was being used as a preventive….

…but I don’t like the idea of “preventing” people from doing crimes that they might do, that they have been profiled to do by prior acts…before they do them, and with lethal force. Ironically, that’s actually the plot of Marvel Comics’ Civil War II, Issue No. 3. People have to do things before you punish them…especially something that you can’t take back, like lethal force.

Being very fair to the police here…the shooter had holed up in a downtown Dallas building and a firefight ensued after hours of negotiations between the suspect and police failed.

“When all attempts to negotiate with the suspect…failed under the exchange of gunfire, the department utilized the mechanical tactical robot, as a last resort, to deliver an explosion device to save the lives of officers and citizens,” the Dallas Police Department explained in a blog post earlier this month. Arguably, the robot was deployed in a situation that was a danger to any police officer that might step in to deal with it, and any nearby civilians.

Still…when Ohio police note in public statements that they too have bomb disposal robots to ensure the security of the Republican National Convention…it makes me wonder if my friend is right, and we have turned some sort of ethical corner about human life. According to public records, more than 20 robots similar to the one in Dallas have been transferred to local law-enforcement agencies in Ohio, including at least three to the State Highway Patrol. Many are on loan from the various Homeland Security agencies attending, along with around 3,000 human agents.

The robots have come through a program called “1033” and others like it, which allow for unneeded military equipment to be donated, sold or otherwise transferred to law-enforcement agencies. According to the “Center for the Study of the Drone,” Ohio is among the top states to receive former military robots, with only California receiving more transfers. Considering that I live in California, that’s not the best news I have ever heard.

That’s how Doctor Doom rolls, after all. He maintains his absolute monarchy through an army of Doombots that are armed like him, and have his appearance, doing all of the day to day police work and law enforcement. As we debate the idea of the militarization of law enforcement and the use of drones in police work, I would like to go on record that we should avoid a Doombot mentality. Just saying.

It’s worth repeating that these robots are not designed to be weapons, but rather to search for and dispose of bombs and other threats. Cleveland Police say that’s how they plan to use the robots at the Republican convention, but some still worry that a deadly genie was let out of a bottle in Dallas and like most genies, will try to stay out of the bottle.
