I wrote a piece for MIT's Technology Review on how Internet privacy works, and how poorly our tools -- browsers, phones -- protect it:
Even if you read the fine print, human beings are awful at pricing out the net present value of a decision whose consequences are far in the future. No one would take up smoking if the tumors sprouted with the first puff. Most privacy disclosures don't put us in immediate physical or emotional distress either. But given a large population making a large number of disclosures, harm is inevitable. We've all heard the stories about people who've been fired because they set the wrong privacy flag on that post where they blew off on-the-job steam.
The risks increase as we disclose more, something that the design of our social media conditions us to do. When you start out in a new social network, you are rewarded with social reinforcement as your old friends pop up and congratulate you on arriving at the party. Subsequent disclosures generate further rewards, but not always. Some disclosures seem like bombshells to you ("I'm getting a divorce") but produce only virtual cricket chirps from your social network. And yet seemingly insignificant communications ("Does my butt look big in these jeans?") can produce a torrent of responses. Behavioral scientists have a name for this dynamic: "intermittent reinforcement." It's one of the most powerful behavioral training techniques we know about. Give a lab rat a lever that produces a food pellet on demand and he'll only press it when he's hungry. Give him a lever that produces food pellets at random intervals, and he'll keep pressing it forever.
The Curious Case of Internet Privacy