Opt In, Opt Out, and When the Private Goes Public

Social media creates a strange environment for many of us. Even as we broadcast information to the world, we sometimes seem to forget that this means the world can read it. If you say something on the Internet, theoretically anyone can find it, especially when it is attached to identifying information like your name, your face, or where you live. People attempting to fly under the radar may obscure this information, but it is still difficult to remain completely invisible on the Internet; if you want to share aspects of your life with the world around you, people you aren’t expecting can read their way into your life.

This was brought home by a Wall Street Journal article earlier this year on the illusion of control: the idea that people believe they control what they put out there, and thus take more risks when they post status updates, blog, and participate in social media. If you believe you’re in control of the environment, of how your content is used and who sees it, you feel more secure; think of wandering around naked in your house because you think the blinds are closed, only to discover that someone came through and opened them all without your awareness or consent.

In fact, that’s a good analogy for the way a lot of social media platforms handle privacy, because they thrive on its absence. They feed on the overshare; it’s what drives user participation, and it’s also what drives advertisers and investors. The more information a platform has about its users, and the more it forces people to live in the public eye, the more data it can build up and the more users it can woo. Hence the slippery privacy policies and imprecise implementations of privacy settings.

Take the ability to be added to a Facebook group whose name exposes a part of your life you might prefer to keep private, with no control over whether you’re added or over who gets to see that information. Services like Facebook, Google, and Twitter offer advanced privacy settings intended to effectively lock your account, but that doesn’t mean you’re entirely safe. You’re not safe from other users republishing your content, for example, and you’re not safe from misconfiguring the settings yourself, especially when they’re so byzantine that it’s extremely hard to tell how to set them up to protect yourself.
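
To make that consent gap concrete, here is a small, entirely hypothetical sketch of two group-add flows, implying no real platform’s API: in the first, someone else’s action makes your membership publicly visible immediately; in the second, nothing is visible until you accept.

    # Hypothetical sketch of two group-add flows; no real platform's API is implied.

    def add_without_consent(group, user):
        """Opt-out style: membership is created and publicly visible immediately."""
        group["members"].append(user)              # the association is live the moment someone else acts
        group["public_member_list"].append(user)   # ...and anyone browsing the group can see it

    def add_with_consent(group, user, pending_invites):
        """Opt-in style: the user must accept before any public association exists."""
        pending_invites.append((group["name"], user))  # held privately until the user confirms

    group = {"name": "Support Group", "members": [], "public_member_list": []}
    pending = []

    add_without_consent(group, "alex")       # "alex" is now publicly listed, through no action of their own
    add_with_consent(group, "sam", pending)  # "sam" is exposed to no one until they confirm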

And you’re really not safe from changes at the site itself that affect the way your information is displayed. Google and Facebook are both infamous for opt-out settings, which expose information by default, rather than opt-in options, which force people to decide whether they want information exposed. The end result is being left standing naked in your living room and being told that you can always just lower the blinds yourself if you don’t want them open. There’s no remorse there, let alone critical thinking about how exposing users might be a problem, and how users might be driven away by feeling that their supposedly locked and protected accounts are not, in fact, as safe as they think they are.
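
The difference comes down to default values. Here is a minimal, entirely hypothetical sketch, with illustrative field names rather than any real platform’s settings: an opt-out feature ships with exposure already enabled, so a user who never visits the settings page is exposed anyway, while an opt-in feature stays off until the user deliberately enables it.

    from dataclasses import dataclass

    @dataclass
    class PrivacySettings:
        """Hypothetical per-user settings; field names are illustrative only."""
        # Opt-out design: the feature launches with exposure already enabled,
        # and the burden is on the user to notice and disable it.
        show_contact_list_publicly: bool = True
        # Opt-in design: nothing is exposed until the user explicitly enables it.
        share_location_with_friends: bool = False

    settings = PrivacySettings()  # a user who never touches the settings page
    print(settings.show_contact_list_publicly)   # True: exposed without ever choosing to be
    print(settings.share_location_with_friends)  # False: private until the user opts in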

At the same time that users are worried about privacy, they’re also sharing, and sometimes those who are the most worried are the ones sharing the most. They’re the people concerned enough about privacy to think they have the controls locked down and can talk freely, or the ones who realise that disclosing intimate information could put them at risk and so take the time to implement the best privacy controls they can find. The people most vulnerable to safety breaches are, in many cases, acutely aware of it, and yet they’re still not being served by the sites they belong to.

Herein lies a tension between users, advertisers, and site owners: the user is the product, and the health and safety of individual products is not of significant concern to the owner. Facebook doesn’t care if a gay teen is outed, because it doesn’t affect its larger bottom line. At most, there will be a brief media furore and the teen might leave the site, possibly taking some friends, possibly not. Google doesn’t care if it exposes a list of someone’s contacts to the world at large, because memories are short and it’s one customer among millions, a drop in the bucket compared to the bigger picture, especially given that social media exists to collate data about people.

The more people, the more data, and the more control the site, not the user, has over that data, the better the company’s position. These sites don’t have a vested interest in protecting their users, although they may pay lip service to the idea, especially when confronted with breaches and challenged to do better. Their engineers aren’t thinking about how to make their services safe and comfortable to use; they’re thinking about how to streamline information gathering, processing, and display. And many of them aren’t approaching development from the perspective of privacy advocates, of people concerned about stalking, abusive ex-partners, and other safety threats facing users who want to be able to interact without endangering themselves.

The illusion of control becomes dangerous, leading people to believe both that they are protecting themselves and that the site they belong to is interested in protecting them and creating a safe user experience. They let down their guard because they believe they have reason to trust, and in doing so, expose themselves to significant risks. The cycle of private going public repeats itself, and with each iteration, companies assure users they’re reforming, which only serves to reinforce the illusion of control. You’ll be fine, we really learned from that experience. Stay with us.