My latest Locus column is “Cold Equations and Moral Hazard”, an essay about the way that our narratives about the future can pave the way for bad people to create, and benefit from, disasters. “If being in a lifeboat gives you the power to make everyone else shut the hell up and listen (or else), then wouldn’t it be awfully convenient if our ship were to go down?”

Apparently, editor John W. Campbell sent back three rewrites of Tom Godwin’s “The Cold Equations” in which the pilot figured out how to save the girl. He was adamant that the universe must punish the girl.

The universe wasn’t punishing the girl, though. Godwin was – and so was Barton (albeit reluctantly).

The parameters of “The Cold Equations” are not the inescapable laws of physics. Zoom out beyond the page’s edges and you’ll find the author’s hands carefully arranging the scenery so that the plague, the world, the fuel, the girl and the pilot are all poised to inevitably lead to her execution. The author, not the girl, decided that there was no autopilot that could land the ship without the pilot. The author decided that the plague was fatal to all concerned, and that the vaccine needed to be delivered within a timeframe that could only be attained through the execution of the stowaway.

It is, then, a contrivance. A circumstance engineered for a justifiable murder. An elaborate shell game that makes the poor pilot – and the company he serves – into victims every bit as much as the dead girl is a victim, forced by circumstance and girlish naïveté to stain their souls with murder.

Moral hazard is the economist’s term for a rule that encourages people to behave badly. For example, a rule that says that you’re not liable for your factory’s pollution if you don’t know about it encourages factory owners to totally ignore their effluent pipes – it turns willful ignorance into a profitable strategy.

Cold Equations and Moral Hazard


Why DRM is the root of all evil

In my latest Guardian column, “What happens with digital rights management in the real world?”, I explain why the most important fact about DRM is how it relates to security and disclosure, and not how it relates to fair use and copyright. Most importantly, I propose a shortcut to DRM reform, through a carefully designed legal test case.

The DMCA is a long and complex instrument, but what I’m talking about here is section 1201: the notorious “anti-circumvention” provisions. They make it illegal to circumvent an “effective means of access control” that restricts a copyrighted work. The companies that make DRM and the courts have interpreted this very broadly, enjoining people from publishing information about vulnerabilities in DRM, from publishing the secret keys hidden in the DRM, from publishing instructions for getting around the DRM – basically, anything that could conceivably give aid and comfort to someone who wanted to do something that the manufacturer or the copyright holder forbade.

Significantly, in 2000, a US federal court found (in Universal City Studios, Inc v Reimerdes) that breaking DRM was illegal, even if you were trying to do something that would otherwise be legal. In other words, if your ebook has a restriction that stops you reading it on Wednesdays, you can’t break that restriction, even if it would be otherwise legal to read the book on Wednesdays.

In the USA, the First Amendment of the Constitution gives broad protection to free expression, and prohibits government from making laws that abridge Americans’ free speech rights. Here, the Reimerdes case set another bad precedent: it moved computer code from the realm of protected expression into a kind of grey zone where it may or may not be protected.

In 1997’s Bernstein v United States, another US federal court found that code was protected expression. Bernstein was a turning point in the history of computers and the law: it concerned a UC Berkeley mathematician named Daniel Bernstein who challenged the American prohibition on producing cryptographic tools that could scramble messages with such efficiency that the police could not unscramble them. The US National Security Agency (NSA) called such programs “munitions” and severely restricted their use and publication. Bernstein published his encryption programs on the internet, and successfully defended his right to do so by citing the First Amendment. When the appellate court agreed, the NSA’s ability to control civilian use of strong cryptography was destroyed. Ever since, our computers have had the power to keep secrets that none may extract except with our permission – that’s why the NSA and GCHQ’s secret anti-security initiatives, Bullrun and Edgehill, targeted vulnerabilities in operating systems, programs, and hardware. They couldn’t defeat the maths (they also tried to subvert the maths, getting the US National Institute of Standards and Technology to adopt a weak algorithm for producing random numbers).
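To make that “keep secrets” point concrete, here is a minimal sketch of my own (not anything from the column or the Bernstein case), using Python’s widely used cryptography library: properly encrypted data is recoverable only by whoever holds the key, which is why attackers go after implementations and endpoints rather than the maths.

```python
# Illustrative only: the messages and variable names here are made up.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()              # a random secret key, held by the owner
mailbox = Fernet(key)

token = mailbox.encrypt(b"a source's name and notes")
print(mailbox.decrypt(token))            # the key holder gets the plaintext back

# Anyone without the key (an eavesdropper, say) gets nothing useful:
snooper = Fernet(Fernet.generate_key())  # a different key entirely
try:
    snooper.decrypt(token)
except InvalidToken:
    print("ciphertext is useless without the right key")
```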

What happens with digital rights management in the real world?


My latest Guardian column, “Digital failures are inevitable, but we need them to be graceful,” talks about evaluating technology based on more than its features — rather, on how you relate to it, and how it relates to you. In particular, I try to make the case for giving especial care to what happens when your technology fails:


Graceful failure is so much more important than fleeting success, but it’s not a feature or a design spec. Rather, it’s a relationship that I have with the technology I use and the systems that are used to produce it.

This is not asceticism. Advocates of software freedom are sometimes accused of elevating ideology over utility. But I use the software I do out of a purely instrumental impulse. The things I do with my computer are the soul of my creative, professional, and personal life. My computer has videos and stills and audio of my daughter’s early life, rare moments of candid memoir from my grandmothers, the precious love letters that my wife and I sent to one another when we courted, the stories I’ve poured my heart and soul into, the confidential and highly sensitive whistleblower emails I’ve gotten from secret sources on investigative pieces; the privileged internal communications of the Electronic Frontier Foundation, a law office to which I have a duty of care as part of my fellowship (and everything else besides).

Knowing that I can work with this stuff right now is simply not enough. I need to know that when my computer breaks, when the software is discontinued, when my computer is lost or stolen, when a service provider goes bust or changes ownership and goes toxic, when a customs officer duplicates my hard-drive at the border, when my survivors attempt to probate my data – when all of that inevitable stuff happens, that my digital life will be saved. That data that should remain confidential will not leak. That data that should be preserved will be. That files that should be accessible can be accessed, without heroic measures to run obsolete software on painstakingly maintained superannuated hardware.

Digital failures are inevitable, but we need them to be graceful

(Image: Smashed, a Creative Commons Attribution (2.0) image from sarahbaker’s photostream)


In my latest Locus column, “Cheap Writing Tricks,” I ruminate on what makes fiction work — why we perceive stories as stories, why we care about characters, and how the construction of stories interacts with the human mind (and why How to Win Friends and Influence People is a great writing tool).

In my latest Guardian column, I explain how UK prime minister David Cameron’s plan to opt the entire nation into a programme of Internet censorship is the worst of all worlds for kids and their parents. Cameron’s version of the Iranian “Halal Internet” can’t possibly filter out all the bad stuff, nor can it avoid falsely catching good stuff we want our kids to see (already the filters are blocking websites about sexual health and dealing with “porn addiction”). That means that our kids will still end up seeing stuff they shouldn’t, but that we parents won’t be prepared for it, thanks to the false sense of security we get from the filters.

In my latest Guardian column, I suggest that we have reached “peak indifference to spying,” the turning point at which the number of people alarmed by surveillance will only grow. It’s not the end of surveillance, it’s not even the beginning of the end of surveillance, but it’s the beginning of the beginning of the end of surveillance.

We have reached the moment after which the number of people who give a damn about their privacy will only increase. The number of people who are so unaware of their privilege or blind to their risk that they think “nothing to hide/nothing to fear” is a viable way to run a civilisation will only decline from here on in.

And that is the beginning of a significant change.

Like all security, privacy is hard. It requires subtle thinking, and the conjunction of law, markets, technology and norms to get right. All four of those factors have been sorely lacking.

The default posture of our devices and software has been to haemorrhage our most sensitive data for anyone who cared to eavesdrop upon them. The default posture of law – fuelled by an unholy confluence of Big Data business models and Greater Manure Pile surveillance – has been to allow for nearly unfettered collection by spies, companies, and companies that provide data to spies. The privacy norm has been all over the place, but mostly dominated by nothing-to-hide. And thanks to the norm, the market for privacy technology has been nearly nonexistent – people with “nothing to fear” won’t pay a penny extra for privacy technology.

We cannot afford to be indifferent to internet spying

(Image: Anonymity, Privacy, and Security Online/Pew Center)

My new Locus column, “Collective Action”, proposes a theory of corruption: the relatively small profits from being a jerk are concentrated, while the much larger harms are diffused, which means that the jerks can afford better lawyers and lobbyists than any one of their victims can. Since the victims are spread out and don’t know each other, it’s hard to fight back together.

Then I propose a solution: using Kickstarter-like mechanisms to fight corruption – a website where victims of everything from patent trolls and copyright trolls all the way up to pollution and robo-signing foreclosures can find each other and pledge to fund a group defense, rather than paying off the bandits.

It’s the Magnificent Seven business model: one year, the villagers stop paying the robbers, and use the money to pay mercenaries to fight the robbers instead.
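In code terms, the mechanism is an all-or-nothing pledge pool. Here is a toy sketch of my own (the class, names and numbers are hypothetical, not anything from the column): pledges only turn into a war chest if enough victims sign on to cover the defense; otherwise everyone is refunded.

```python
from dataclasses import dataclass, field

@dataclass
class DefenseFund:
    goal: float                                  # cost of mounting the group defense
    pledges: dict = field(default_factory=dict)  # victim -> amount pledged

    def pledge(self, victim: str, amount: float) -> None:
        self.pledges[victim] = self.pledges.get(victim, 0.0) + amount

    def settle(self) -> tuple:
        """All-or-nothing: collect only if the goal is met, otherwise refund everyone."""
        total = sum(self.pledges.values())
        if total >= self.goal:
            return ("fund the defense", total)
        return ("refund all pledges", 0.0)

# Hypothetical numbers: 400 victims each pledge far less than a nuisance settlement.
fund = DefenseFund(goal=100_000)
for n in range(400):
    fund.pledge(f"victim-{n}", 300)
print(fund.settle())    # -> ('fund the defense', 120000.0)
```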
