Why I don’t believe in robots

My new Guardian column is "Why it is not possible to regulate robots," which discusses where and how robots can be regulated, and whether there is any sensible ground for "robot law" as distinct from "computer law."

One thing that is glaringly absent from both the Heinleinian and Asimovian brain is the idea of software as an immaterial, infinitely reproducible nugget at the core of the system. Here, in the second decade of the 21st century, it seems to me that the most important fact about a robot – whether it is self-aware or merely autonomous – is the operating system, configuration, and code running on it.

If you accept that robots are just machines – no different in principle from sewing machines, cars, or shotguns – and that the thing that makes them "robot" is the software that runs on a general-purpose computer that controls them, then all the legislative and regulatory and normative problems of robots start to become a subset of the problems of networks and computers.

If you're a regular reader, you'll know that I believe two things about computers: first, that they are the most significant functional element of most modern artifacts, from cars to houses to hearing aids; and second, that we have dramatically failed to come to grips with this fact. We keep talking about whether 3D printers should be "allowed" to print guns, or whether computers should be "allowed" to make infringing copies, or whether your iPhone should be "allowed" to run software that Apple hasn't approved and put in its App Store.

Practically speaking, though, these all amount to the same question: how do we keep computers from executing certain instructions, even if the people who own those computers want to execute them? And the practical answer is, we can't.

3 Responses to “Why I don’t believe in robots”

  1. Marek says:

    Calo's paper isn't currently available for download, so I don't know whether he covers these issues, but a significant concern amongst roboticists at the moment is the idea of "embodiment" - the idea that the kind of body you have not only plays a role in the kind of mind that you need and have, but is partly constitutive of what your mind is and how you think.

    Within robotics this issue is probably best known from Rodney Brooks's work on behaviour-based robotics, but the ideas have a much wider provenance. The suspicion is that Boston Dynamics's work is strongly driven by such considerations, for instance (difficult to know for sure).

    All of which is prelude to the suggestion that the more general the robot's capabilities (situations in which it can work, kinds of jobs that it can do), the more specific its control structures need to be, and the less the idea of a general-purpose computer works as the system that controls it: you actually get the physical structure of the system to do a lot of the computational work. So I think there will be a difference between computers and robots, but it will be much closer to the distinction we currently have between robots and androids (in the full-on sci-fi sense).
    (Disclaimer: I'm somewhat familiar with the Robotics issues, but I'm not myself a roboticist. A nice introduction to the embodiment issue here is this: http://mitpress.mit.edu/books/how-body-shapes-way-we-think )

  2. james says:

    In the Asimov brain, the Three Laws are part of the matrix: without them, the positronic brain doesn’t work at all.

    But of course, this is essentially the argument behind DRM, or Trusted Computing. If someone owns the hardware (their robot), do they own it, or just a license to it? I can mostly load whatever I want on an Android phone, but not on an Apple device.

    If a robot comes with the same set of conditions (closed source, and with some ability to protect itself, much as people protect themselves from undesired modifications), could robot systems be sealed tightly enough to prevent modification?

  3. jane says:

    But how do people protect themselves from undesired modifications, James? Once under the anaesthetic, one is unlikely to know just what, exactly, is being added or subtracted.
    Also, robots are, by definition, essentially machines, whereas the avatar takes the fast route, like a credit card slipping a lock between the 'real' and 'virtual' worlds, which these days lack credible boundaries. So it is a good question whether robots could be sealed, and whether there could be laws against hacking them, as this leads into the question, not yet addressed, of whether there should be laws against hacking human brains.

Cory Doctorow’s craphound.com is proudly powered by WordPress