In my latest Guardian column, The problem with self-driving cars: who controls the code?, I take issue with the “Trolley Problem” as applied to autonomous vehicles, which asks: if your car has to choose between a maneuver that kills you and one that kills other people, which one should it be programmed to make?

The problem with this formulation is that it misses the big question that underpins it: if your car were programmed to kill you under certain circumstances, how would the manufacturer stop you from changing its programming so that your car never killed you?

There’s a strong argument for locking owners out of that programming. The software in an autonomous vehicle will be in charge of a high-speed moving object that shares public roads with soft and fragile humans. Tinker with your car’s brains? Why not perform amateur brain surgery on yourself first?

But this obvious answer has an obvious problem: it doesn’t work. Every locked device can be easily jailbroken, for good, well-understood technical reasons. The primary effect of rules protecting digital locks isn’t to keep people from reconfiguring their devices – it’s to ensure that they have to do so without the help of a legitimate business or an off-the-shelf product. Recall the years before the UK telecoms regulator Ofcom clarified the legality of unlocking mobile phones in 2002: even then, it wasn’t hard to unlock your phone. You could download software from the net to do it, or pay someone who ran a grey-market unlocking business. But now that it’s clearly legal, you can have your phone unlocked at the newsagent’s or even the dry-cleaner’s.

If self-driving cars can only be safe if we are sure no one can reconfigure them without manufacturer approval, then they will never be safe.


The problem with self-driving cars: who controls the code?
[Cory Doctorow/The Guardian]