I was on American Public Media’s Marketplace yesterday talking (MP3) about our posting of a rarer-than-rare Disney treasure, the never-before-seen original prospectus for Disneyland, scanned before it was sold to noted jerkface Glenn Beck, who has squirreled it away in his private Scrooge McDuck vault.
For months, I’ve been following the story that the Mozilla project was set to add closed source Digital Rights Management technology to its free/open browser Firefox, and today they’ve made the announcement, which I’ve covered in depth for The Guardian. Mozilla made the decision out of fear that the organization would haemorrhage users and become irrelevant if it couldn’t support Netflix, Hulu, BBC iPlayer, Amazon Video, and other services that only work in browsers that treat their users as untrustable adversaries.
They’ve gone to great — even unprecedented — lengths to minimize the ways in which this DRM can attack Firefox users. But I think there’s more that they can, and should, do. I’m also sceptical of their claim that it was DRM or irrelevance, though I think they were sincere in making it. I think they hate that it’s come to this, and that no one there is happy about it.
I could not be more heartsick at this turn of events.
We need to turn the tide on DRM, because there is no place in a post-Snowden, post-Heartbleed world for technology that tries to hide things from its owners. DRM has special protection under the law that makes it a crime to tell people about flaws in their DRM-locked systems — so every DRM system is potentially a reservoir of long-lived vulnerabilities that can be exploited by identity thieves, spies, and voyeurs.
It’s clear that Mozilla isn’t happy about this turn of events, and in our conversations, people there characterised it as something they’d been driven to by the entertainment companies and the complicity of the commercial browser vendors, who have enthusiastically sold out their users’ integrity and security.
Mitchell Baker, the executive chairwoman of the Mozilla Foundation and Mozilla Corporation, told me that “this is not a happy day for the web” and “it’s not in line with the values that we’re trying to build. This does not match our value set.”
But both she and Andreas Gal, Mozilla’s chief technology officer, were adamant that they felt they had no choice but to add DRM if they were going to continue Mozilla’s overall mission of keeping the web free and open.
I am sceptical about this claim. I don’t doubt that it’s sincerely made, but I found the case for it weak. When I pressed Gal for evidence that, without Netflix, Firefox users would switch away, he cited the huge volume of internet traffic generated by Netflix streams.
There’s no question that Netflix video and other video streams account for an appreciable slice of the internet’s overall traffic. But video streams are also the bulkiest files to transfer. That video streams use a lot of bytes isn’t a surprise.
When a charitable nonprofit like Mozilla makes a shift as substantial as this one – installing closed-source software designed to treat computer users as untrusted adversaries – you’d expect there to be a data-driven research story behind it, meticulously documenting the proposition that, without DRM, irrelevance is inevitable. The large number of bytes being shifted by Netflix is a poor proxy for that detailed picture.
There are other ways in which Mozilla’s DRM is better for user freedom than its commercial competitors’. While the commercial browsers’ DRM assigns unique identifiers to users that can be used to spy on viewing habits across multiple video providers and sessions, the Mozilla DRM uses different identifiers for different services.
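To make the distinction concrete, here’s a minimal sketch of the per-service-identifier idea – my illustration, not Mozilla’s actual CDM code, and the device secret and function names are hypothetical. Each service sees a stable identifier, but two services comparing notes see unrelated values for the same device:

```python
# Illustrative sketch only -- not Mozilla's actual implementation. Deriving
# a separate identifier per service means no single value follows a user
# across video providers, unlike a global unique device ID.
import hmac
import hashlib

def per_service_id(device_secret: bytes, service_origin: str) -> str:
    """Derive a stable identifier that is unique to one service.

    Because the derivation is a one-way keyed hash, no service can link
    its identifier to another service's identifier for the same device.
    """
    return hmac.new(device_secret, service_origin.encode(), hashlib.sha256).hexdigest()

secret = b"per-device secret kept inside the DRM sandbox"  # hypothetical
print(per_service_id(secret, "https://netflix.com"))  # unrelated to...
print(per_service_id(secret, "https://hulu.com"))     # ...this value
```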
In my latest Guardian column, ‘Cybersecurity’ begins with integrity, not surveillance, I try to make sense of the argument against surveillance. Is mass surveillance bad because it doesn’t catch “bad guys” or because it is immoral? There’s a parallel to torture — even if you can find places where torture would work to get you some useful information, it would still be immoral. Likewise, I’ve come to realize that the “it doesn’t work” argument isn’t one that I want to support anymore, because even if mass surveillance did work, it would still be bad.
One thing that parenting has taught me is that surveillance and experimentation are hard to reconcile. My daughter is learning, and learning often consists of making mistakes constructively. There are times when she is working right at the limits of her abilities – drawing or dancing or writing or singing or building – and she catches me watching her and gets this look of mingled embarrassment and exasperation, and then she changes back to some task where she has more mastery. No one – not even a small child – likes to look foolish in front of other people.
Putting whole populations – the whole human species – under continuous, total surveillance is a profoundly immoral act, no matter whether it works or not. There no longer is a meaningful distinction between the digital world and the physical world. Your public transit rides, your love notes, your working notes and your letters home from your journeys are now part of the global mesh of electronic communications. The inability to live and love, to experiment and err, without oversight, is wrong because it’s wrong, not because it doesn’t catch bad guys.
Everyone from Orwell to Trotsky recognised that control over information means control over society. On the eve of the November Revolution, Trotsky ordered the Red Guard to seize control over the post and telegraph offices. I mentioned this to Jacob Appelbaum, who also works on many spy-resistant information security tools, like Tor (The Onion Router, a privacy and anonymity tool for browsing the web), and he said, “A revolutionary act today is making sure that no one can ever seize control over the network.”
In my latest Locus column, How to Talk to Your Children About Mass Surveillance, I tell the story of how I explained the Snowden leaks to my six-year-old, and the surprising interest and comprehension she showed during our talk and afterwards. Kids, it seems, intuitively understand what it’s like to be constantly monitored by unaccountable, self-appointed authority figures!
So I explained to my daughter that there was a man who was a spy, who discovered that the spies he worked for were breaking the law and spying on everyone, capturing all their e-mails and texts and video-chats and web-clicks. My daughter has figured out how to use a laptop, phone, or tablet to peck out a message to her grandparents (autocomplete and spell-check actually make typing into an educational experience for kids, who can choose their words from drop-down lists that get better as they key in letters); she’s also used to videoconferencing with relatives around the world. So when I told her that the spies were spying on everything, she had some context for it.
Right away, we were off to the races. “How can they listen to everyone at once?” “How can they read all those messages?” “How many spies are there?” I told her about submarine fiber-optic taps, prismatic beam-splitters, and mass databases. Again, she had a surprising amount of context for this, having encountered digital devices whose capacity was full – as when we couldn’t load more videos onto a tablet – and whose capacities could be expanded with additional storage.
My latest Guardian column, Internet service providers charging for premium access hold us all to ransom, explains what’s at stake now that the FCC is prepared to let ISPs charge services for “premium” access to their subscribers. It’s pretty much the worst Internet policy imaginable, an anti-innovation, anti-democratic, anti-justice hand-grenade lobbed by telcos who shout “free market” while they are the beneficiaries of the most extreme industrial government handouts imaginable.
The FCC promised a fix, and here it is: FCC chairman Tom Wheeler, an Obama appointee and former cable lobbyist, has drawn up rules to allow ISPs to decide which communications you can see in a timely, best-effort fashion and which services will be also-ran laggards. In so doing, Chairman Wheeler sets the stage for a further magnification of the distorting influence of money and incumbency on our wider society. Political candidates whose message is popular, but who lack the budget to bribe every ISP to deliver it in a timely fashion, will be less equipped to reach voters than their better-financed rivals. A recent study looked at 20 years’ worth of US policy outcomes and found that they exclusively responded to the needs of the richest 10% of Americans. Now the FCC is proposing to cook the process further, so that the ability of the ignored 90% to talk to one another, network and organise and support organisations that support their interests will be contingent on their ability to out-compete the already advantaged elite interests in the race to bribe carriers for “premium” coverage.
If you think of a business idea that’s better than any that have come before – if you’re ready to do to Google what Google did to Altavista; if you’re ready to do to the iPod what the iPod did to the Walkman; if you’re ready to do to Netflix what Netflix did to cable TV – you have to start out with a bribery warchest that beats out the firms that clawed their way to the top back when there was a fairer playing-field.
The FCC and its apologists will shrug and say that the ISPs are businesses and they own their lines and can do what they want with them. They’ll say that we can’t expect the carriers to invest in next-generation networks if they can’t maximise their profits from them.
But this is nonsense. The big US carriers are already deriving bumper profits from their ISP business, while their shareholder disclosures show that they’re making only the most cursory investment in new network infrastructure. (Americans have been waiting for fast “fiber-to-the-kerb” connectivity for decades; mostly what they’re getting is “fiber-to-the-press-release” puff pieces from ISPs who gull uncritical reporters into repeating their empty promises of fast networks, just around the corner.)
Internet service providers charging for premium access hold us all to ransom [Cory Doctorow/The Guardian]
(Image: Evidence A: The Ransom Note, Jared and Corin, CC-BY)
My new Guardian column is “Why it is not possible to regulate robots,” which discusses where and how robots can be regulated, and whether there is any sensible ground for “robot law” as distinct from “computer law.”
One thing that is glaringly absent from both the Heinleinian and Asimovian brain is the idea of software as an immaterial, infinitely reproducible nugget at the core of the system. Here, in the second decade of the 21st century, it seems to me that the most important fact about a robot – whether it is self-aware or merely autonomous – is the operating system, configuration, and code running on it.

If you accept that robots are just machines – no different in principle from sewing machines, cars, or shotguns – and that the thing that makes them “robot” is the software that runs on a general-purpose computer that controls them, then all the legislative and regulatory and normative problems of robots start to become a subset of the problems of networks and computers.
If you’re a regular reader, you’ll know that I believe two things about computers: first, that they are the most significant functional element of most modern artifacts, from cars to houses to hearing aids; and second, that we have dramatically failed to come to grips with this fact. We keep talking about whether 3D printers should be “allowed” to print guns, or whether computers should be “allowed” to make infringing copies, or whether your iPhone should be “allowed” to run software that Apple hasn’t approved and put in its App Store.
Practically speaking, though, these all amount to the same question: how do we keep computers from executing certain instructions, even if the people who own those computers want to execute them? And the practical answer is, we can’t.
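The impossibility isn’t just practical, it’s mathematical – the halting problem wearing a trenchcoat. Here’s an informal sketch (the names and the toy “oracle” are mine, purely illustrative) of the classic diagonalization argument: any purported checker that decides whether a program will execute a forbidden instruction can be defeated by a program built to do the opposite of whatever the checker predicts.

```python
# Informal sketch: a perfect "will this program do the forbidden thing?"
# checker cannot exist, because any candidate checker can be handed a
# program constructed to invert the checker's own verdict about it.

def do_forbidden():
    print("forbidden instruction executed")

def make_spite(oracle):
    """Build a program that defeats `oracle` by doing the opposite of
    whatever it predicts."""
    def spite():
        if oracle(spite):      # oracle predicts: "spite will misbehave"
            return             # ...so spite stays clean: oracle was wrong
        do_forbidden()         # oracle predicted "clean": wrong again
    return spite

# Whatever verdict an oracle gives, the program built against it refutes it:
always_says_clean = lambda prog: False
make_spite(always_says_clean)()   # prints the forbidden message
```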
Here’s a reading (MP3) of my November 2013 Locus column, Collective Action, in which I propose an Internet-enabled “Magnificent Seven” business model for foiling extortion, especially copyright- and patent-trolling. In this model, victims of extortionists find each other on the Internet and pledge to divert a year’s worth of “license fees” to a collective defense fund that will be used to invalidate a patent or prove that a controversial copyright has lapsed. The name comes from the classic film The Magnificent Seven (based, in turn, on Akira Kurosawa’s Seven Samurai), in which villagers decide one year to take the money they’d normally give to the bandits and turn it over to mercenaries who kill the bandits.
Why has Warner gotten away with its theft of “Happy Birthday” for so long? Because the interests of all the people who pay the license fee are diffused, and Warner’s interests are concentrated. For any one licensee, the rational course of action is paying Warner, rather than fighting in court. For Warner, the rational course is fighting in court, every time.
In this regard, Warner is in the same position as copyright and patent trolls: the interests of the troll are concentrated. Their optimal strategy is to fight back when pushed. But it’s the reverse for their victims: the best thing for them to do is to settle.
Collectively, though, the victims are always out more than the cost of a defense. That is, all the money made by a troll from a single stupid patent is much more than the cost of fighting to get the patent invalidated. All the money made by Warner on “Happy Birthday” dwarfs the expense of proving, in court, that they weren’t entitled to any of it.
The reason the victims don’t get together to fight back is that they don’t know each other and have no way to coordinate with one another. In economists’ jargon, they have a “collective action problem.”
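To see the arithmetic, here’s a toy model with made-up numbers: if each victim redirects one year’s “license fee” into a shared defense fund, a modest number of pledgers out-spends the troll’s litigation costs.

```python
# Back-of-the-envelope model of the pledge scheme. All figures are
# hypothetical; the point is the shape of the numbers, not the numbers.
import math

annual_fee = 5_000        # hypothetical per-victim license fee, USD
defence_cost = 2_000_000  # hypothetical one-time cost of litigating to invalidation
victims = 500             # pledgers who divert one year's fee to the fund

pool = annual_fee * victims
print(f"pooled pledges: ${pool:,}")                      # pooled pledges: $2,500,000
print(f"fund covers the fight: {pool >= defence_cost}")  # True

# Break-even: how many victims must pledge before fighting beats paying
print(f"minimum pledgers: {math.ceil(defence_cost / annual_fee)}")  # 400
```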
Mastering by John Taylor Williams: wryneckstudio@gmail.com
John Taylor Williams is an audiovisual and multimedia producer based in Washington, DC, and the co-host of the Living Proof Brew Cast. Hear him wax poetic over a pint or two of beer by visiting livingproofbrewcast.com. In his free time he makes “Beer Jewelry” and “Odd Musical Furniture.” He often “meditates while reading cookbooks.”
Podcast: What happens with digital rights management in the real world?
Here’s a reading (MP3) of a recent Guardian column, What happens with digital rights management in the real world, where I attempt to explain the technological realpolitik of DRM, which has nothing much to do with copyright, and everything to do with Internet security.
The entertainment industry calls DRM “security” software, because it makes them secure from their customers. Security is not a matter of abstract absolutes; it requires a context. You can’t be “secure,” generally — you can only be secure from some risk. For example, having food makes you secure from hunger, but puts you at risk from obesity-related illness.
DRM is designed on the presumption that users don’t want it, and if they could turn it off, they would. You only need DRM to stop users from doing things they’re trying to do and want to do. If the thing the DRM restricts is something no one wants to do anyway, you don’t need the DRM. You don’t need a lock on a door that no one ever wants to open.
DRM assumes that the computer’s owner is its adversary. For DRM to work, there has to be no obvious way to remove, interrupt or fool it. For DRM to work, it has to reside in a computer whose operating system is designed to obfuscate some of its files and processes: to deliberately hoodwink the computer’s owner about what the computer is doing. If you ask your computer to list all the running programs, it has to hide the DRM program from you. If you ask it to show you the files, it has to hide the DRM files from you. Anything less and you, as the computer’s owner, would kill the program and delete its associated files at the first sign of trouble.
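Here’s a sketch of the stakes – my illustration, not any real DRM vendor’s code, and the process name is hypothetical. If the DRM agent appeared in an ordinary process listing, the machine’s owner could end it in a few lines, which is exactly why it can’t be allowed to appear:

```python
# Why DRM must hide from the OS's own process listing: a visible DRM
# component could be found and terminated by the machine's owner with
# trivial code. Requires `pip install psutil`; "drm_agent" is a
# hypothetical process name.
import psutil

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "drm_agent":
        proc.terminate()   # the owner wins -- unless the process is hidden
```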
An increase in the security of the companies you buy your media from means a decrease in your own security. When your computer is designed to treat you as an untrusted party, you are at serious risk: anyone who can put malicious software on your computer need only exploit its intentional capacity to disguise its operation from you, and it becomes much harder for you to know when and how you’ve been compromised.
Mastering by John Taylor Williams: wryneckstudio@gmail.com
John Taylor Williams is an audiovisual and multimedia producer based in Washington, DC, and the co-host of the Living Proof Brew Cast. Hear him wax poetic over a pint or two of beer by visiting livingproofbrewcast.com. In his free time he makes “Beer Jewelry” and “Odd Musical Furniture.” He often “meditates while reading cookbooks.”
In my latest Guardian column, If GCHQ wants to improve national security it must fix our technology, I argue that computer security isn’t really an engineering issue; it’s a public health issue. As with public health, it’s more important to be sure that our pathogens are disclosed, understood and fixed than it is to keep them secret so we can use them against our enemies.
Scientists formulate theories that they attempt to prove through experiments that are reviewed by peers, who attempt to spot flaws in the reasoning and methodology. Scientific theories are in a state of continuous, tumultuous improvement as old ideas are overturned in part or whole, and replaced with new ones.
Security is science on meth. There is a bedrock of security that is considered relatively stable – the mathematics of scrambling and descrambling messages – but everything above that bedrock has all the stability of a half-set custard. That is, the best way to use those stable, well-validated algorithms is mostly up for grabs, as the complex interplay of incompatible systems, human error, legacy systems, regulations, laziness, recklessness, naivete, adversarial cunning and perverse commercial incentives all jumble together in ways that open the American retailer Target to the loss of 100m credit card numbers, and the whole internet to GCHQ spying.
As Schneier says: “Anyone can design a security system that works so well that he can’t figure out how to break it.” That is to say, your best effort at security is, by definition, only secure against people who are at least as dumb as you are. Unless you happen to be the smartest person in the world, you need to subject your security system to the kind of scrutiny that scientists use to validate their theories, and be prepared to incrementally patch and refactor things as new errors are discovered and reported.
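For a sense of what building on the bedrock looks like in practice, here’s a small illustrative example – my choice of library, not anything from the column. The Fernet construction in Python’s cryptography package wraps well-validated primitives (AES in CBC mode plus an HMAC), so the fragile layers above the bedrock – key handling, authentication, message formats – have already survived exactly the kind of adversarial peer review Schneier describes:

```python
# Leaning on a widely reviewed construction instead of inventing one.
# Install with `pip install cryptography`.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this secret; losing it loses the data
box = Fernet(key)

token = box.encrypt(b"attack at dawn")
print(box.decrypt(token))     # b'attack at dawn'
# Tampering with `token` raises InvalidToken rather than returning garbage:
# a failure mode that has been scrutinized by many eyes, not a half-set custard.
```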
If GCHQ wants to improve national security it must fix our technology
(Image: Coughs and Sneezes Spread Diseases, Wikimedia Commons, Public Domain)
Yesterday at SXSW, Barton Gellman and I did a one-hour introductory Q&A before Edward Snowden’s appearance. Right after Snowden and his colleagues from the ACLU wrapped up, I sat down and wrote up their event for The Guardian, who’ve just posted my impressions: