Podcast: What happens with digital rights management in the real world?
Here’s a reading (MP3) of a recent Guardian column, What happens with digital rights management in the real world?, where I attempt to explain the technological realpolitik of DRM, which has nothing much to do with copyright and everything to do with Internet security.
The entertainment industry calls DRM “security” software, because it makes them secure from their customers. Security is not a matter of abstract absolutes; it requires a context. You can’t be “secure” in general — you can only be secure from some specific risk. For example, having food makes you secure from hunger, but puts you at risk of obesity-related illness.
DRM is designed on the presumption that users don’t want it, and if they could turn it off, they would. You only need DRM to stop users from doing things they’re trying to do and want to do. If the thing the DRM restricts is something no one wants to do anyway, you don’t need the DRM. You don’t need a lock on a door that no one ever wants to open.
DRM assumes that the computer’s owner is its adversary. For DRM to work, there has to be no obvious way to remove, interrupt or fool it. For DRM to work, it has to reside in a computer whose operating system is designed to obfuscate some of its files and processes: to deliberately hoodwink the computer’s owner about what the computer is doing. If you ask your computer to list all the running programs, it has to hide the DRM program from you. If you ask it to show you the files, it has to hide the DRM files from you. Anything less and you, as the computer’s owner, would kill the program and delete its associated files at the first sign of trouble.
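The hiding described above can be illustrated with a toy sketch. This is not the code of any real DRM system — the process names and the two functions are invented for illustration — but it shows the essential move: the listing the owner sees is the true listing with the DRM’s own entries filtered out.

```python
# Hypothetical sketch of DRM-style process hiding. All names here
# ("drm_agent", "license_daemon", etc.) are invented for illustration;
# no real DRM product or OS API is being depicted.

HIDDEN = {"drm_agent", "license_daemon"}  # assumed DRM components

def real_process_list():
    """Stand-in for the operating system's true process table."""
    return ["browser", "drm_agent", "editor", "license_daemon", "shell"]

def visible_process_list():
    """What the owner is shown: the true table minus the DRM's entries."""
    return [p for p in real_process_list() if p not in HIDDEN]

print(visible_process_list())  # → ['browser', 'editor', 'shell']
```

The owner asking “what is running?” gets an answer that is complete except for exactly the programs they would most want to kill — which is why the same trick is prized by malware authors.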
An increase in the security of the companies you buy your media from means a decrease in your own security. When your computer is designed to treat you as an untrusted party, you are at serious risk: anyone who can put malicious software on your computer has only to take advantage of your computer’s intentional capacity to disguise its operation from you in order to make it much harder for you to know when and how you’ve been compromised.
Mastering by John Taylor Williams: email@example.com
John Taylor Williams is an audiovisual and multimedia producer based in Washington, DC, and the co-host of the Living Proof Brew Cast. Hear him wax poetic over a pint or two of beer by visiting livingproofbrewcast.com. In his free time he makes “Beer Jewelry” and “Odd Musical Furniture.” He often “meditates while reading cookbooks.”
In my latest Guardian column, If GCHQ wants to improve national security it must fix our technology, I argue that computer security isn’t really an engineering issue; it’s a public health issue. As with public health, it’s more important to ensure that our pathogens are disclosed and understood than to keep them secret so we can use them against our enemies.
Scientists formulate theories that they attempt to prove through experiments that are reviewed by peers, who attempt to spot flaws in the reasoning and methodology. Scientific theories are in a state of continuous, tumultuous improvement as old ideas are overturned in part or whole, and replaced with new ones.
Security is science on meth. There is a bedrock of security that is considered relatively stable – the mathematics of scrambling and descrambling messages – but everything above that bedrock has all the stability of a half-set custard. That is, the best way to use those stable, well-validated algorithms is mostly up for grabs, as the complex interplay of incompatible systems, human error, legacy systems, regulations, laziness, recklessness, naivete, adversarial cunning and perverse commercial incentives all jumble together in ways that open the American retailer Target to the loss of 100m credit card numbers, and the whole internet to GCHQ spying.
As Schneier says: “Anyone can design a security system that works so well that he can’t figure out how to break it.” That is to say, your best effort at security is, by definition, only secure against people who are at least as dumb as you are. Unless you happen to be the smartest person in the world, you need to subject your security system to the kind of scrutiny that scientists use to validate their theories, and be prepared to incrementally patch and refactor things as new errors are discovered and reported.
If GCHQ wants to improve national security it must fix our technology
(Image: File:CoughsAndSneezesSpreadDiseases.jpg, Wikimedia Commons, Public Domain)
Yesterday at SXSW, Barton Gellman and I did a one-hour introductory Q&A before Edward Snowden’s appearance. Right after Snowden and his colleagues from the ACLU wrapped up, I sat down and wrote up their event for The Guardian, who’ve just posted my impressions:
My latest Locus column is “Cold Equations and Moral Hazard”, an essay about the way that our narratives about the future can pave the way for bad people to create, and benefit from, disasters. “If being in a lifeboat gives you the power to make everyone else shut the hell up and listen (or else), then wouldn’t it be awfully convenient if our ship were to go down?”
Apparently, editor John W. Campbell sent back three rewrites in which the pilot figured out how to save the girl. He was adamant that the universe must punish the girl.
The universe wasn’t punishing the girl, though. Godwin was – and so was Barton (albeit reluctantly).
The parameters of “The Cold Equations” are not the inescapable laws of physics. Zoom out beyond the page’s edges and you’ll find the author’s hands carefully arranging the scenery so that the plague, the world, the fuel, the girl and the pilot are all poised to inevitably lead to her execution. The author, not the girl, decided that there was no autopilot that could land the ship without the pilot. The author decided that the plague was fatal to all concerned, and that the vaccine needed to be delivered within a timeframe that could only be attained through the execution of the stowaway.
It is, then, a contrivance. A circumstance engineered for a justifiable murder. An elaborate shell game that makes the poor pilot – and the company he serves – into victims every bit as much as the dead girl is a victim, forced by circumstance and girlish naïveté to stain their souls with murder.
Moral hazard is the economist’s term for a rule that encourages people to behave badly. For example, a rule that says that you’re not liable for your factory’s pollution if you don’t know about it encourages factory owners to totally ignore their effluent pipes – it turns willful ignorance into a profitable strategy.
Cold Equations and Moral Hazard
Why DRM is the root of all evil
In my latest Guardian column, What happens with digital rights management in the real world?, I explain why the most important fact about DRM is how it relates to security and disclosure, and not how it relates to fair use and copyright. Most importantly, I propose a shortcut through DRM reform, through a carefully designed legal test-case.
The DMCA is a long and complex instrument, but what I’m talking about here is section 1201: the notorious “anti-circumvention” provisions. They make it illegal to circumvent an “effective means of access control” that restricts a copyrighted work. The companies that make DRM and the courts have interpreted this very broadly, enjoining people from publishing information about vulnerabilities in DRM, from publishing the secret keys hidden in the DRM, from publishing instructions for getting around the DRM – basically, anything that could conceivably give aid and comfort to someone who wanted to do something that the manufacturer or the copyright holder forbade.
Significantly, in 2000, a US appeals court found (in Universal City Studios, Inc v Reimerdes) that breaking DRM was illegal, even if you were trying to do something that would otherwise be legal. In other words, if your ebook has a restriction that stops you reading it on Wednesdays, you can’t break that restriction, even if it would be otherwise legal to read the book on Wednesdays.
In the USA, the First Amendment of the Constitution gives broad protection to free expression, and prohibits government from making laws that abridge Americans’ free speech rights. Here, the Reimerdes case set another bad precedent: it moved computer code from the realm of protected expression into a kind of grey zone where it may or may not be protected.
In 1997’s Bernstein v United States, another US appeals court found that code was protected expression. Bernstein was a turning point in the history of computers and the law: it concerned itself with a UC Berkeley mathematician named Daniel Bernstein who challenged the American prohibition on producing cryptographic tools that could scramble messages with such efficiency that the police could not unscramble them. The US National Security Agency (NSA) called such programs “munitions” and severely restricted their use and publication. Bernstein published his encryption programs on the internet, and successfully defended his right to do so by citing the First Amendment. When the appellate court agreed, the NSA’s ability to control civilian use of strong cryptography was destroyed. Ever since, our computers have had the power to keep secrets that none may extract except with our permission – that’s why the NSA and GCHQ’s secret anti-security initiatives, Bullrun and Edgehill, targeted vulnerabilities in operating systems, programs, and hardware. They couldn’t defeat the maths (they also tried to subvert the maths, getting the US National Institute of Standards and Technology to adopt a weak algorithm for producing random numbers).
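The “maths they couldn’t defeat” can be shown in miniature. The sketch below is a one-time pad — a toy chosen for clarity, not any cipher used by the systems Bullrun and Edgehill targeted: XOR a message with a truly random, single-use key of the same length, and the ciphertext alone reveals nothing; only the keyholder can recover the plaintext.

```python
# Toy one-time pad: illustrates why strong crypto can't be defeated
# head-on, only bypassed. Not a depiction of any deployed system.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    assert len(key) == len(data), "one-time pad key must match message length"
    return bytes(a ^ b for a, b in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # random, used once, kept secret

ciphertext = xor_bytes(message, key)
# Without the key, every equal-length plaintext is equally consistent
# with this ciphertext -- there is nothing for an eavesdropper to attack.
recovered = xor_bytes(ciphertext, key)    # decryption is the same XOR
assert recovered == message
```

An attacker who can’t get the key has to attack everything around the maths — the operating system, the hardware, the random-number generator — which is exactly what the column describes.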
What happens with digital rights management in the real world?
My latest Guardian column, “Digital failures are inevitable, but we need them to be graceful,” talks about evaluating technology based on more than its features — rather, on how you relate to it, and how it relates to you. In particular, I try to make the case for giving especial care to what happens when your technology fails:
Graceful failure is so much more important than fleeting success, but it’s not a feature or a design spec. Rather, it’s a relationship that I have with the technology I use and the systems that are used to produce it.
This is not asceticism. Advocates of software freedom are sometimes accused of elevating ideology over utility. But I use the software I do out of a purely instrumental impulse. The things I do with my computer are the soul of my creative, professional, and personal life. My computer has videos and stills and audio of my daughter’s early life, rare moments of candid memoir from my grandmothers, the precious love letters that my wife and I sent to one another when we courted, the stories I’ve poured my heart and soul into, the confidential and highly sensitive whistleblower emails I’ve gotten from secret sources on investigative pieces; the privileged internal communications of the Electronic Frontier Foundation, a law office to which I have a duty of care as part of my fellowship (and everything else besides).
Knowing that I can work with this stuff in a way that works is simply not enough. I need to know that when my computer breaks, when the software is discontinued, when my computer is lost or stolen, when a service provider goes bust or changes ownership and goes toxic, when a customs officer duplicates my hard drive at the border, when my survivors attempt to probate my data – when all of that inevitable stuff happens, my digital life will be saved. That data that should remain confidential will not leak. That data that should be preserved will be. That files that should be accessible can be accessed, without heroic measures to run obsolete software on painstakingly maintained superannuated hardware.
Digital failures are inevitable, but we need them to be graceful
(Image: Smashed, a Creative Commons Attribution (2.0) image from sarahbaker’s photostream)
In my latest Locus column, “Cheap Writing Tricks,” I ruminate on what makes fiction work — why we perceive stories as stories, why we care about characters, and how the construction of stories interacts with the human mind (and why How to Win Friends and Influence People is a great writing tool).
In my latest Guardian column, I explain how UK prime minister David Cameron’s plan to opt the entire nation into a programme of Internet censorship is the worst of all worlds for kids and their parents. Cameron’s version of the Iranian “Halal Internet” can’t possibly filter out all the bad stuff, nor can it avoid falsely catching good stuff we want our kids to see (already the filters are blocking websites about sexual health and dealing with “porn addiction”). That means that our kids will still end up seeing stuff they shouldn’t, but that we parents won’t be prepared for it, thanks to the false sense of security we get from the filters.
In my latest Guardian column, I suggest that we have reached “peak indifference to spying,” the turning point at which the number of people alarmed by surveillance will only grow. It’s not the end of surveillance, it’s not even the beginning of the end of surveillance, but it’s the beginning of the beginning of the end of surveillance.
We have reached the moment after which the number of people who give a damn about their privacy will only increase. The number of people who are so unaware of their privilege or blind to their risk that they think “nothing to hide/nothing to fear” is a viable way to run a civilisation will only decline from here on in.
And that is the beginning of a significant change.
Like all security, privacy is hard. It requires subtle thinking, and the conjunction of law, markets, technology and norms to get right. All four of those factors have been sorely lacking.
The default posture of our devices and software has been to haemorrhage our most sensitive data to anyone who cares to eavesdrop. The default posture of law – fuelled by an unholy confluence of Big Data business models and Greater Manure Pile surveillance – has been to allow for nearly unfettered collection by spies, companies, and companies that provide data to spies. The privacy norm has been all over the place, but mostly dominated by nothing-to-hide. And thanks to the norm, the market for privacy technology has been nearly nonexistent – people with “nothing to fear” won’t pay a penny extra for privacy technology.
We cannot afford to be indifferent to internet spying
(Image: Anonymity, Privacy, and Security Online/Pew Center)
My new Locus column, Collective Action, proposes a theory of corruption: the relatively small profits from being a jerk are concentrated, while the much larger harms are diffused, which means that the jerks can afford better lawyers and lobbyists than any one of their victims can. Since the victims are spread out and don’t know one another, it’s hard for them to fight back together.
Then I propose a solution: using Kickstarter-like mechanisms to fight corruption: a website where victims of everything from patent trolls and copyright trolls, all the way up to pollution and robo-signing foreclosures, can find each other and pledge to fund a group defense, rather than paying off the bandits.
It’s the Magnificent Seven business model: one year, the villagers stop paying the robbers, and use the money to pay mercenaries to fight the robbers instead.
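The mechanism being proposed is what economists call an assurance contract, and it fits in a few lines. This sketch is illustrative only — the function name, the pledge amounts, and the 700-coin mercenary fee are invented — but it captures the key property: no villager’s money is collected unless enough pledges arrive to actually fund the fight.

```python
# Minimal sketch of a Kickstarter-style assurance contract, assuming
# a simple all-or-nothing rule. Names and numbers are illustrative.

def settle_pledges(pledges: dict, threshold: int) -> dict:
    """Collect pledges only if their total meets the threshold needed
    to fund the joint defense; otherwise refund everyone in full."""
    total = sum(pledges.values())
    if total >= threshold:
        return {"funded": True, "collected": total, "refunds": {}}
    return {"funded": False, "collected": 0, "refunds": dict(pledges)}

# Seven villagers each pledge 100 coins toward a 700-coin mercenary fee.
pledges = {f"villager_{i}": 100 for i in range(1, 8)}
result = settle_pledges(pledges, threshold=700)
assert result["funded"] and result["collected"] == 700
```

Because pledgers risk nothing when the threshold isn’t met, it’s safe to pledge even if you doubt your neighbours will — which is exactly what a scattered pool of a troll’s victims needs.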