If you think self-driving cars have a Trolley Problem, you’re asking the wrong questions

In my latest Guardian column, The problem with self-driving cars: who controls the code?, I take issue with the “Trolley Problem” as applied to autonomous vehicles, which asks: if your car has to choose between a maneuver that kills you and one that kills other people, which should it be programmed to do?

The problem with this formulation is that it misses the bigger question underpinning it: if your car were programmed to kill you under certain circumstances, how would the manufacturer stop you from changing its programming so that it never killed you?

There’s a strong argument for locking cars against this kind of owner modification. The programming in autonomous vehicles will be in charge of a high-speed, moving object that inhabits public roads, amid soft and fragile humans. Tinker with your car’s brains? Why not perform amateur brain surgery on yourself first?

But this obvious answer has an obvious problem: it doesn’t work. Every locked device can be easily jailbroken, for good, well-understood technical reasons. The primary effect of rules about digital locks isn’t to keep people from reconfiguring their devices – it’s to ensure that they have to do so without the help of a legitimate business or product. Recall the years before the UK telecoms regulator Ofcom clarified the legality of unlocking mobile phones in 2002: it wasn’t hard to unlock your phone. You could download software from the net to do it, or pay someone who operated an illegal unlocking business. But now that it’s clearly legal, you can have your phone unlocked at the newsagent’s or even the dry-cleaner’s.

If self-driving cars can only be safe if we are sure no one can reconfigure them without manufacturer approval, then they will never be safe.


The problem with self-driving cars: who controls the code?
[Cory Doctorow/The Guardian]

2 Responses to “If you think self-driving cars have a Trolley Problem, you’re asking the wrong questions”

  1. Ken Brown

    It’s also scary to think about trusting your life to a machine that comes equipped with a wi-fi link that may not be possible to shut off, so it can receive updates as they become available. A group that can bypass any security layer installed could cripple a large city by “updating” a small number of cars on a major highway during rush hour.

  2. mkzero

    “A group that can bypass any security layer installed could cripple a large city by “updating” a small number of cars on a major highway during rush hour.”

    You don’t need high-tech for that. Just throw a few nails on the highway and you’re good to go.

    Typical “normal” thinking. Sure, you could bypass all those security layers, it’s never impossible. But smart people go for the low-hanging fruit. Also, most cars nowadays already come equipped with wifi, bluetooth, maybe even 3G/4G – if you want to get in, there’s enough surface to attack. The question is why the old car manufacturers don’t secure this by isolating those layers. Your engine probably won’t need any internet uplink and your sound system probably shouldn’t be controlling your brakes.

    But back to the assumption that a car has to decide whom to kill: the probability that it will ever face that choice is a lot smaller than the framing suggests. And with enough autonomous cars on the road it’s probably close to zero – as long as you don’t fuck up the firmware, the car is faster at responding, won’t drive under the influence, won’t break speed limits or do any of the other stupid things humans usually do when they cause accidents.
