
  1. Calo's paper isn't currently available for download, so I don't know whether he covers these issues, but a significant concern amongst roboticists at the moment is "embodiment": the idea that the kind of body you have not only plays a role in the kind of mind you need and have, but is partly constitutive of what your mind is and how you think.

    Within robotics this issue is probably best known from Rodney Brooks's work on behaviour-based robotics, but the ideas have a much wider provenance. The suspicion is that Boston Dynamics's work is strongly driven by such considerations, for instance (though it's difficult to know for sure).

    All of which is prelude to the suggestion that the more general a robot's capabilities (the situations in which it can work, the kinds of jobs it can do), the more specific its control structures need to be, and the less the idea of a general-purpose computer works as the system that controls it: you actually get the physical structure of the system to do a lot of the computational work. So I think there will be a difference between computers and robots, but it will be much closer to the distinction we currently have between robots and androids (in the full-on sci-fi sense).
    (Disclaimer: I'm somewhat familiar with the robotics issues, but I'm not myself a roboticist. A nice introduction to the embodiment issue is this: )

    Comment by Marek — April 2, 2014 @ 4:04 am

  2. In the Asimov brain, the three laws are part of the matrix: the positronic brain doesn't work at all without them.

    But of course, this is essentially the argument behind DRM, or Trusted Computing. If someone owns the hardware (their robot), do they own it, or just a license to it? I can mostly load whatever I want on an Android phone, but not on an Apple device.

    If a robot comes with the same set of conditions (closed source, and with some ability to protect itself, as people protect themselves from undesired modifications), could robot systems be sufficiently sealed to prevent modification?

    Comment by james — April 2, 2014 @ 3:04 pm

  3. But how do people protect themselves from undesired modifications, James? Once under the anaesthetic one is unlikely to know just what, exactly, is being added or subtracted.
    Also, robots are, by definition, essentially machines, whereas the avatar takes the fast route, like a credit card slipping a lock, between the 'real' and 'virtual' worlds, which these days lack credible boundaries. So it is a good question whether robots could be sealed and whether there could be laws against hacking them, as this leads to the question of whether there should be laws against hacking human brains, which has not yet been addressed.

    Comment by jane — April 9, 2014 @ 11:31 am
