Articles, News


My new Guardian column is “Why it is not possible to regulate robots,” which discusses where and how robots can be regulated, and whether there is any sensible ground for “robot law” as distinct from “computer law.”


One thing that is glaringly absent from both the Heinleinian and Asimovian brain is the idea of software as an immaterial, infinitely reproducible nugget at the core of the system. Here, in the second decade of the 21st century, it seems to me that the most important fact about a robot – whether it is self-aware or merely autonomous – is the operating system, configuration, and code running on it.

If you accept that robots are just machines – no different in principle from sewing machines, cars, or shotguns – and that the thing that makes them “robot” is the software that runs on a general-purpose computer that controls them, then all the legislative and regulatory and normative problems of robots start to become a subset of the problems of networks and computers.

If you’re a regular reader, you’ll know that I believe two things about computers: first, that they are the most significant functional element of most modern artifacts, from cars to houses to hearing aids; and second, that we have dramatically failed to come to grips with this fact. We keep talking about whether 3D printers should be “allowed” to print guns, or whether computers should be “allowed” to make infringing copies, or whether your iPhone should be “allowed” to run software that Apple hasn’t approved and put in its App Store.

Practically speaking, though, these all amount to the same question: how do we keep computers from executing certain instructions, even if the people who own those computers want to execute them? And the practical answer is, we can’t.

Why it is not possible to regulate robots

Articles, News, Podcast

Here’s a reading (MP3) of my November 2013 Locus column, Collective Action, in which I propose an Internet-enabled “Magnificent Seven” business model for foiling corruption, especially copyright- and patent-trolling. In this model, victims of extortionists find each other on the Internet and pledge to divert a year’s worth of “license fees” to a collective defense fund that will be used to invalidate a patent or prove that a controversial copyright has lapsed. The name comes from the classic film The Magnificent Seven (based, in turn, on Akira Kurosawa’s Seven Samurai) in which villagers decide one year to take the money they’d normally give to the bandits, and turn it over to mercenaries who kill the bandits.

Why has Warner gotten away with its theft of “Happy Birthday” for so long? Because the interests of all the people who pay the license fee are diffused, and Warner’s interests are concentrated. For any one licensee, the rational course of action is paying Warner, rather than fighting in court. For Warner, the rational course is fighting in court, every time.

In this regard, Warner is in the same position as copyright and patent trolls: the interests of the troll are concentrated. Their optimal strategy is to fight back when pushed. But it’s the reverse for their victims: the best thing for them to do is to settle.

Collectively, though, the victims are always out more than the cost of a defense. That is, all the money made by a troll from a single stupid patent is much more than the cost of fighting to get the patent invalidated. All the money made by Warner on “Happy Birthday” dwarfs the expense of proving, in court, that they weren’t entitled to any of it.

The reason the victims don’t get together to fight back is that they don’t know each other and have no way to coordinate with one another. In economists’ jargon, they have a “collective action problem.”
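To see how the arithmetic flips once victims can coordinate, here’s a toy back-of-the-envelope sketch in Python. All the figures (the fee, the victim count, the defense cost) are made up for illustration and don’t come from the column:

```python
# Hypothetical numbers, for illustration only -- none of these figures
# come from the column.
license_fee = 5_000      # yearly "license fee" the troll demands per victim
defense_cost = 250_000   # cost of one court fight to kill the patent/claim
victims = 200            # number of victims paying every year

# Individually, fighting is irrational: one defense costs far more than paying.
print(f"pay up: ${license_fee:,}   fight alone: ${defense_cost:,}")

# Collectively, it flips: one year of diverted fees funds a shared defense
# that ends the racket for everyone.
pooled = license_fee * victims
print(f"pooled fees: ${pooled:,}   one shared defense: ${defense_cost:,}")
print("shared defense wins" if pooled > defense_cost else "troll still wins")
```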

Mastering by John Taylor Williams: wryneckstudio@gmail.com

John Taylor Williams is an audiovisual and multimedia producer based in Washington, DC, and the co-host of the Living Proof Brew Cast. Hear him wax poetic over a pint or two of beer by visiting livingproofbrewcast.com. In his free time he makes “Beer Jewelry” and “Odd Musical Furniture.” He often “meditates while reading cookbooks.”

MP3

Articles, News, Podcast

Podcast: What happens with digital rights management in the real world?

Here’s a reading (MP3) of a recent Guardian column, “What happens with digital rights management in the real world?”, where I attempt to explain the technological realpolitik of DRM, which has nothing much to do with copyright, and everything to do with Internet security.

The entertainment industry calls DRM “security” software, because it makes them secure from their customers. Security is not a matter of abstract absolutes; it requires a context. You can’t be “secure,” generally — you can only be secure from some risk. For example, having food makes you secure from hunger, but puts you at risk from obesity-related illness.

DRM is designed on the presumption that users don’t want it, and if they could turn it off, they would. You only need DRM to stop users from doing things they’re trying to do and want to do. If the thing the DRM restricts is something no one wants to do anyway, you don’t need the DRM. You don’t need a lock on a door that no one ever wants to open.

DRM assumes that the computer’s owner is its adversary. For DRM to work, there has to be no obvious way to remove, interrupt or fool it. For DRM to work, it has to reside in a computer whose operating system is designed to obfuscate some of its files and processes: to deliberately hoodwink the computer’s owner about what the computer is doing. If you ask your computer to list all the running programs, it has to hide the DRM program from you. If you ask it to show you the files, it has to hide the DRM files from you. Anything less and you, as the computer’s owner, would kill the program and delete its associated files at the first sign of trouble.
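As a toy illustration of the kind of concealment described above, here is a Python sketch. This is not any real DRM product’s code: the process and file names are invented, and real schemes hide themselves at the driver or kernel level rather than with a user-space filter like this.

```python
# Toy sketch of a "file browser" and "task manager" that lie to the
# computer's owner by omitting the DRM's own files and process.
# All names here are hypothetical.
import os

HIDDEN_PROCESSES = {"drm_agent"}             # hypothetical DRM process name
HIDDEN_FILES = {".drm_keys", ".drm_state"}   # hypothetical DRM files

def list_files(path="."):
    """List a directory, quietly omitting the DRM's own files."""
    return [name for name in os.listdir(path) if name not in HIDDEN_FILES]

def list_processes(running):
    """List running programs, quietly omitting the DRM's own process."""
    return [proc for proc in running if proc not in HIDDEN_PROCESSES]

if __name__ == "__main__":
    print(list_processes(["browser", "editor", "drm_agent"]))
    # prints ['browser', 'editor'] -- the owner never sees drm_agent
```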

An increase in the security of the companies you buy your media from means a decrease in your own security. When your computer is designed to treat you as an untrusted party, you are at serious risk: anyone who can put malicious software on your computer has only to take advantage of your computer’s intentional capacity to disguise its operation from you in order to make it much harder for you to know when and how you’ve been compromised.

Mastering by John Taylor Williams: wryneckstudio@gmail.com

John Taylor Williams is an audiovisual and multimedia producer based in Washington, DC, and the co-host of the Living Proof Brew Cast. Hear him wax poetic over a pint or two of beer by visiting livingproofbrewcast.com. In his free time he makes “Beer Jewelry” and “Odd Musical Furniture.” He often “meditates while reading cookbooks.”

MP3

Articles, News


In my latest Guardian column, If GCHQ wants to improve national security it must fix our technology, I argue that computer security isn’t really an engineering issue; it’s a public health issue. As with public health, it’s more important to be sure that our pathogens are disclosed, understood and eradicated than it is to keep them secret so we can use them against our enemies.

Scientists formulate theories that they attempt to prove through experiments that are reviewed by peers, who attempt to spot flaws in the reasoning and methodology. Scientific theories are in a state of continuous, tumultuous improvement as old ideas are overturned in part or whole, and replaced with new ones.

Security is science on meth. There is a bedrock of security that is considered relatively stable – the mathematics of scrambling and descrambling messages – but everything above that bedrock has all the stability of a half-set custard. That is, the best way to use those stable, well-validated algorithms is mostly up for grabs, as the complex interplay of incompatible systems, human error, legacy systems, regulations, laziness, recklessness, naivete, adversarial cunning and perverse commercial incentives all jumble together in ways that open the American retailer Target to the loss of 100m credit card numbers, and the whole internet to GCHQ spying.

As Schneier says: “Anyone can design a security system that works so well that he can’t figure out how to break it.” That is to say, your best effort at security is, by definition, only secure against people who are at least as dumb as you are. Unless you happen to be the smartest person in the world, you need to subject your security system to the kind of scrutiny that scientists use to validate their theories, and be prepared to incrementally patch and refactor things as new errors are discovered and reported.

If GCHQ wants to improve national security it must fix our technology

(Image: File:CoughsAndSneezesSpreadDiseases.jpg, Wikimedia Commons, Public Domain)

Articles, News


My latest Locus column is “Cold Equations and Moral Hazard”, an essay about the way that our narratives about the future can pave the way for bad people to create, and benefit from, disasters. “If being in a lifeboat gives you the power to make everyone else shut the hell up and listen (or else), then wouldn’t it be awfully convenient if our ship were to go down?”

Apparently, editor John W. Campbell sent back three rewrites in which the pilot figured out how to save the girl. He was adamant that the universe must punish the girl.

The universe wasn’t punishing the girl, though. Godwin was – and so was Barton (albeit reluctantly).

The parameters of ‘‘The Cold Equations’’ are not the inescapable laws of physics. Zoom out beyond the page’s edges and you’ll find the author’s hands carefully arranging the scenery so that the plague, the world, the fuel, the girl and the pilot are all poised to inevitably lead to her execution. The author, not the girl, decided that there was no autopilot that could land the ship without the pilot. The author decided that the plague was fatal to all concerned, and that the vaccine needed to be delivered within a timeframe that could only be attained through the execution of the stowaway.

It is, then, a contrivance. A circumstance engineered for a justifiable murder. An elaborate shell game that makes the poor pilot – and the company he serves – into victims every bit as much as the dead girl is a victim, forced by circumstance and girlish naïveté to stain their souls with murder.

Moral hazard is the economist’s term for a rule that encourages people to behave badly. For example, a rule that says that you’re not liable for your factory’s pollution if you don’t know about it encourages factory owners to totally ignore their effluent pipes – it turns willful ignorance into a profitable strategy.

Cold Equations and Moral Hazard

Articles, News

Why DRM is the root of all evil

In my latest Guardian column, What happens with digital rights management in the real world?, I explain why the most important fact about DRM is how it relates to security and disclosure, and not how it relates to fair use and copyright. Most importantly, I propose a shortcut to DRM reform through a carefully designed legal test case.

The DMCA is a long and complex instrument, but what I’m talking about here is section 1201: the notorious “anti-circumvention” provisions. They make it illegal to circumvent an “effective means of access control” that restricts a copyrighted work. The companies that make DRM and the courts have interpreted this very broadly, enjoining people from publishing information about vulnerabilities in DRM, from publishing the secret keys hidden in the DRM, from publishing instructions for getting around the DRM – basically, anything that could conceivably give aid and comfort to someone who wanted to do something that the manufacturer or the copyright holder forbade.

Significantly, in 2000, a US federal court found (in Universal City Studios, Inc v Reimerdes) that breaking DRM was illegal, even if you were trying to do something that would otherwise be legal. In other words, if your ebook has a restriction that stops you reading it on Wednesdays, you can’t break that restriction, even if it would be otherwise legal to read the book on Wednesdays.

In the USA, the First Amendment of the Constitution gives broad protection to free expression, and prohibits government from making laws that abridge Americans’ free speech rights. Here, the Reimerdes case set another bad precedent: it moved computer code from the realm of protected expression into a kind of grey-zone where it may or may not be protected.

In 1997’s Bernstein v United States, another US federal court found that code was protected expression. Bernstein was a turning point in the history of computers and the law: it concerned itself with a UC Berkeley mathematician named Daniel Bernstein, who challenged the American prohibition on producing cryptographic tools that could scramble messages with such efficiency that the police could not unscramble them. The US National Security Agency (NSA) called such programs “munitions” and severely restricted their use and publication. Bernstein published his encryption programs on the internet, and successfully defended his right to do so by citing the First Amendment. When the appellate court agreed, the NSA’s ability to control civilian use of strong cryptography was destroyed. Ever since, our computers have had the power to keep secrets that none may extract except with our permission – that’s why the NSA and GCHQ’s secret anti-security initiatives, Bullrun and Edgehill, targeted vulnerabilities in operating systems, programs, and hardware. They couldn’t defeat the maths (they also tried to subvert the maths, getting the US National Institute of Standards and Technology to adopt a weak algorithm for producing random numbers).

What happens with digital rights management in the real world?

Articles, News

My latest Guardian column, “Digital failures are inevitable, but we need them to be graceful,” talks about evaluating technology based on more than its features — rather, on how you relate to it, and how it relates to you. In particular, I try to make the case for giving especial care to what happens when your technology fails:


Graceful failure is so much more important than fleeting success, but it’s not a feature or a design spec. Rather, it’s a relationship that I have with the technology I use and the systems that are used to produce it.

This is not asceticism. Advocates of software freedom are sometimes accused of elevating ideology over utility. But I use the software I do out of a purely instrumental impulse. The things I do with my computer are the soul of my creative, professional, and personal life. My computer has videos and stills and audio of my daughter’s early life, rare moments of candid memoir from my grandmothers, the precious love letters that my wife and I sent to one another when we courted, the stories I’ve poured my heart and soul into, the confidential and highly sensitive whistleblower emails I’ve gotten from secret sources on investigative pieces; the privileged internal communications of the Electronic Frontier Foundation, a law office to whom I have a duty of care as part of my fellowship (and everything else besides).

Knowing that I can work with this stuff in a way that works is simply not enough. I need to know that when my computer breaks, when the software is discontinued, when my computer is lost or stolen, when a service provider goes bust or changes ownership and goes toxic, when a customs officer duplicates my hard-drive at the border, when my survivors attempt to probate my data – when all of that inevitable stuff happens, my digital life will be saved. That data that should remain confidential will not leak. That data that should be preserved will be. That files that should be accessible can be accessed, without heroic measures to run obsolete software on painstakingly maintained superannuated hardware.

Digital failures are inevitable, but we need them to be graceful

(Image: Smashed, a Creative Commons Attribution (2.0) image from sarahbaker’s photostream)

Articles, News

In my latest Locus column, “Cheap Writing Tricks,” I ruminate on what makes fiction work — why we perceive stories as stories, why we care about characters, and how the construction of stories interacts with the human mind (and why How to Win Friends and Influence People is a great writing tool).


Articles, News


In my latest Guardian column, I explain how UK prime minister David Cameron’s plan to opt the entire nation into a programme of Internet censorship is the worst of all worlds for kids and their parents. Cameron’s version of the Iranian “Halal Internet” can’t possibly filter out all the bad stuff, nor can it avoid falsely catching good stuff we want our kids to see (already the filters are blocking websites about sexual health and dealing with “porn addiction”). That means that our kids will still end up seeing stuff they shouldn’t, but that we parents won’t be prepared for it, thanks to the false sense of security we get from the filters.