Bruce Schneier's Keynote: Following the Money, or Why Security Has So Little to Do with Security

From ToorCon 2003, www.toorcon.org, San Diego, CA
Impressionistic transcript by Cory Doctorow, doctorow@craphound.com
Sept 27, 2003

--

Security is always a question of what you're getting and what you're giving up. You can prevent 9/11 by grounding all the aircraft. That's not sustainable in the long term, but we can do it in the short term. CAPPS II, home alarms, etc. -- they're all trade-offs. Is the added security of using a hotel deadbolt worth the effort? To make the trade-off, you need to understand how well it works (is it a good lock?), what the threat is (who barges into hotel rooms anyway?), and what it costs (how hard is it?).

* Is the countermeasure effective in mitigating your risk?
* Are the trade-offs worth the security?

Often these questions aren't decided by you -- they're decided by others. You don't get to choose whether your bag is searched.

In my new book, I came up with a five-step process for figuring out security. It doesn't give you the answer, but it'll tell you what the questions are:

1. What assets are you trying to protect? Post-9/11, we didn't ask that question right -- what problem does a national ID solve?

2. What are the risks to those assets? There are many risks: criminals, terrorists and insiders. A security measure that prevents bombs in laptops may increase the risk of stolen laptops at airports.

3. How well does the countermeasure work?

4. What other risks does the solution cause? If we're going to use TIA, who gets to be in charge, and why should we trust them? How do you stop the additional abuse risk?

5. What are the costs and trade-offs? Our security budget isn't infinite, so we want to get the most for our security dollar. What's the cost in convenience, cash, liberty?

IS IT WORTH IT? Having evaluated what I'm getting and what I'm giving up, is it worth it?

--

Why is security so rarely about security?

* People don't evaluate this
* People succumb to fear
* People do things counter to their security -- they don't safeguard their Internet privacy despite avowals of its importance
* People say one thing and do another

--

Agendas:

* Every security decision affects multiple players -- who gets to make the decision is important
* Pilots want guns in the cockpit, flight attendants don't
* Politicians want to be perceived as strong
* During the DC Sniper attacks, the murder rate in the counties where they occurred doubled, but people acted as if it had increased tenfold. Principals cancelled school events because they'd get fired if they held an assembly where a kid got shot -- nothing to do with the actual risk.

--

Threats are complicated: Security systems have to let the good guys in and keep the bad guys out. A system that only keeps bad guys out is useless. Systems can fail by keeping good users out or by letting bad users in. There are way more legit users than attackers, so your measure is more likely to punish the innocent than the guilty. Imagine an airport system that shoots terrorists automatically and fails one time in 100. In a year, you'll kill one terrorist and millions of innocents (see the back-of-the-envelope sketch below).

One interesting possible outcome is diversion: if I have a home alarm and an attacker robs my neighbor, I've won. But if I'm the mayor, I haven't -- because the crime rate is still up. If Logan's security shifts terrorists to LaGuardia, the Boston Port Authority has won, but the nation hasn't.
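[Ed: a quick back-of-the-envelope on the shoot-the-terrorists screener above -- a minimal sketch in Python, where the passenger count and the single real terrorist are my own illustrative assumptions, not figures from the talk.]

    # Base-rate arithmetic for a hypothetical screener that shoots
    # "terrorists" automatically but fails one time in 100.
    # All numbers below are illustrative assumptions, not Schneier's.
    passengers_per_year = 600_000_000   # rough annual US enplanements (assumed)
    real_terrorists = 1                 # assume one actual terrorist in that crowd
    false_positive_rate = 1 / 100       # "fails one time in 100"

    innocents_shot = (passengers_per_year - real_terrorists) * false_positive_rate

    print(f"terrorists stopped: {real_terrorists}")
    print(f"innocents shot:     {innocents_shot:,.0f}")   # about 6,000,000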
The defender can't implement his own security. Someone decides to keep corkscrews off airplanes, so he has to hire the TSA to keep them off the planes. The people and systems are imperfect, so the policy as implemented won't fully counter the threat. The asset-owner has to deal with risks -- the frequency, probability and severity of attacks. But he actually has to address PERCEIVED risks, which are distorted by fear, publicity, etc. The risks get munged before they get to the owner. The asset-owner also has to weigh moral, social and technical considerations (can we torture crooks?) as well as legal and economic ones (can we watch dressing rooms? can we get rid of them?). Legit users give the owner feedback: I won't buy clothes without a dressing room. I won't get on an airplane if you take away my laptop. Trusted people might rebel: we can't ask the TSA to do something really gross. How the security measure defends against attackers is an extremely minor element. It's almost irrelevant when compared to everything else.

--

Security is a negotiation

* All players have to negotiate
* If you don't have any power in the negotiation, there's not much you can do to increase security
* You can't say no to a hotel that asks for photo ID
* You're not completely powerless, but you don't have direct power -- the wrong time to negotiate airport security is while you're walking through it

--

Power:

* Governments, industries, companies and orgs have direct power
* Diffuse people have less

For example:

* Counterfeiting: It's not in your best interest to detect phony money in your wallet. Any security system where the government enlists you to detect counterfeits won't work. You don't want to find it.
* Sales clerks and credit card verification: Clerks get paid whether the credit card goes through or not -- the clerk's boss might check, but the clerk won't. "Your purchase is free if you don't get a receipt" -- this is brilliant. Employees steal out of the till by not ringing up sales: a clerk takes $4 from you and doesn't ring it up, and it's undetectable in an audit. So the owner wants the clerk to give you a receipt, and gives you an incentive to watch the clerk and make sure he does. That's ju-jitsu security.
* Banning things on airplanes: knitting needles were banned on airplanes, but lighters weren't, because the cigarette lobby got to the TSA.
* Banks' verification of signatures on checks: Banks don't verify signatures except on big checks. The bank doesn't care whether the check is good or not: you do. The customer will tell the bank if there's a problem. If the customer doesn't notice, there's no problem. If a customer complains, the bank immediately reverses the charge and then investigates.

--

How do you change an agenda?

* Government intervention: laws or regulations. We can make it illegal for airlines to hand out personal info. Companies will follow laws if they can get busted for breaking them. Without a law, they'll do the thing that makes them money. We can make software companies liable for security vulnerabilities. When MSFT ships insecure code, they don't pay the cost -- we do. The cost of an MSFT vulnerability is an externality. Shifting the liability will create an incentive to fix things.
* Market forces: reinforcing cockpit doors was always fought by the airlines, until market forces convinced them to do it. We can demand things. Remember the Tylenol poisonings? In their wake, we got tamper-proof packaging. Spend five minutes thinking about it and you'll realize it does no good -- just think of penetrating it with a syringe. There's no regulation that requires it. But market forces demanded reassurance from the vendor, and customers were assuaged by the measure.
* Social norms: whether you lock your bicycle differs between the US and Japan. Post-9/11, our mood changed to allow us to make changes regardless of whether they did any good.
* Technology: Wireless burglar alarms were cheaper, so more people installed them -- the technology changed the cost side of the trade-off. Palladium will change security for better or for worse.

The method that makes sense in a given situation depends on that situation.

--

Aligning interests with capabilities

* We want to get the most security for the least trade-off
* Determine the acceptable risk level
* Figure out the trade-offs

THE BEST WAY TO DO THIS IS TO PUT THE PERSON WHO CAN FIX THE PROBLEM ON THE HOOK FOR FIXING IT.

We have no choice but to accept some residual risk. "No terrorism is acceptable" is nonsense: there IS an amount of rat droppings that is acceptable in your breakfast cereal. Some risk is inherent in everything. We've decided that 40,000 auto deaths a year is OK. In the end, there's an amount of danger that we are willing to accept.

--

As an individual, you have no power in many negotiations. In aggregate, you have considerable power: as a voting bloc, as a member of an advocacy org. Exercise your political muscle!

--

Buy and read "Beyond Fear: Thinking Sensibly about Security in an Uncertain World" http://www.schneier.com/bf.html [Ed: I second this motion]

Subscribe to Crypto-Gram, a free monthly infosec newsletter http://www.schneier.com/crypto-gram.html [Ed: this one, too]

--

Q&A:

Last Wednesday, a bunch of security guys published a report that said "monoculture is a security risk." Overreliance on MSFT OSes is a risk -- like relying on one species of potato in Ireland or one species of cotton in the US is a risk. Dan Geer led this; he worked for @Stake, a company that does a lot of MSFT contracting, and they fired him on Thursday and back-dated it to Tuesday. Dan found out about it by looking at the company website and seeing his name was gone. When they did this, the report got a lot of press. It was stupid: security experts speak with honesty and integrity on their own behalf, and that's why you hire us.

--

Apropos of that, homogeneity can be mitigated by redundancy -- paramecia are prolific even though they're homogeneous. Different strategies work for different organisms -- lobsters lay thousands of eggs and ignore them; primates birth a few babies and take care of them.

--

I think a cyberwar is really unlikely. Cyber attacks aren't easy. We get bad worms, but we can't predict the outcome -- it's hard to figure out what a worm will do. You can do way more damage by filling a car with explosives and driving it into a building. When a paging satellite goes down and you can't get your messages, that's not cyber-war, that's cyber-inconvenience. The only genuine "terroristic" attack I can find is a guy in Australia who compromised a system and dumped stinky effluent into a city. Seagulls can sit on a transformer and black out a city. They're more dangerous than cyber-attackers.

--

The risk that Palladium addresses is competition from open source, not worms. Palladium is a lock-in technology in the guise of security.

--

Re voting machines:

* Anything without an audit trail is bad.
* Audits are how we secure elections.
* The best way to do an election is to count hands. The reason we can't do that is anonymity. That's the hard part.
* People said, "We can secure financial transactions, why not elections?" Anonymity is why.
--

Agendas are always hijacked

* Govt hijacks terrorism
* MSFT hijacks media companies' anti-piracy
* Media companies hijack piracy outrage to enforce use restrictions on honest users

--

Future of cryptanalysis: I do very little of this these days. There's lots to be discovered, but we have the math pretty much down. I don't think decades of work will be overturned. The big problem isn't technology anymore. Even bad math in crypto isn't the weak link -- the weak link is human.

eof