How Facebook and Other Sites Manipulate Your Privacy Choices

Social media platforms repeatedly use so-called dark patterns to nudge you toward giving away more of your data.

In 2010, the Electronic Frontier Foundation was fed up with Facebook’s pushy interface. The platform had a way of coercing people into giving up more and more of their privacy. The question was, what to call that coercion? Zuckermining? Facebaiting? Was it a Zuckerpunch? The name that eventually stuck: Privacy Zuckering, or when “you are tricked into publicly sharing more information about yourself than you really intended to.”

A decade later, Facebook has weathered enough scandals to know that people care about those manipulations; last year, it even paid a $5 billion fine for making “deceptive claims about consumers’ ability to control the privacy of their personal data.” And yet researchers have found that Privacy Zuckering and other shady tactics remain alive and well online. They’re especially rampant on social media, where managing your privacy is, in some ways, more confusing than ever.

Here’s an example: A recent Twitter pop-up told users “You’re in control,” before inviting them to “turn on personalized ads” to “improve which ones you see” on the platform. Don’t want targeted ads while doomscrolling? Fine. You can “keep less relevant ads.” Language like that frames the privacy-protecting choice as a downgrade: the same decision, just made to sound worse.

Actually, it’s an old trick. Facebook used it back in 2010 when it let users opt out of Facebook partner websites collecting and logging their publicly available Facebook information. Anyone who declined that “personalization” saw a pop-up that asked, “Are you sure? Allowing instant personalization will give you a richer experience as you browse the web.” Until recently, Facebook also cautioned people against opting out of its facial-recognition features: “If you keep face recognition turned off, we won’t be able to use this technology if a stranger uses your photo to impersonate you.” The button to turn the setting on was bright and blue; the button to keep it off was a less eye-catching gray.

Researchers call these design and wording decisions “dark patterns,” a term for user-experience design that tries to manipulate your choices. When Instagram repeatedly nags you to “please turn on notifications” and doesn’t present an option to decline? That’s a dark pattern. When LinkedIn shows you part of an InMail message in your email, but forces you to visit the platform to read more? Also a dark pattern. When Facebook redirects you to “log out” when you try to deactivate or delete your account? That’s a dark pattern too.

Dark patterns show up all over the web, nudging people to subscribe to newsletters, add items to their carts, or sign up for services. But, says Colin Gray, a human-computer interaction researcher at Purdue University, they’re particularly insidious “when you’re deciding what privacy rights to give away, what data you’re willing to part with.” Gray has been studying dark patterns since 2015. He and his research team have identified five basic types: nagging, obstruction, sneaking, interface interference, and forced action. All of them show up in privacy controls. He and other researchers in the field have noted the disconnect between Silicon Valley’s grand overtures toward privacy and the tools it actually offers for managing those choices, which remain filled with confusing language, manipulative design, and other features engineered to leech more data.

Those privacy shell games aren’t limited to social media. They’ve become endemic to the web at large, especially in the wake of Europe’s General Data Protection Regulation. Since GDPR went into effect in 2018, websites have been required to ask people for consent to collect certain types of data. But some consent banners simply ask you to accept the privacy policies—with no option to say no. “Some research has suggested that upwards of 70 percent of consent banners in the EU have some kind of dark pattern embedded in them,” says Gray. “That’s problematic when you're giving away substantial rights.”

Recently, sites like Facebook and Twitter have begun to give their users more fine-grained privacy controls. Facebook’s newly rolled-out Privacy Checkup, for instance, guides you through a series of choices with brightly colored illustrations. But Gray notes that the defaults are often set with less privacy in mind, and the sheer number of checkboxes can overwhelm users. “If you have a hundred checkboxes to check, who’s going to do that?” he says.

Last year, US senators Mark Warner and Deb Fischer introduced a bill that would ban these kinds of “manipulative user interfaces.” The Deceptive Experiences to Online Users Reduction Act—DETOUR for short—would make it illegal for websites like Facebook to use dark patterns when it comes to personal data. “Misleading prompts to just click the ‘OK’ button can often transfer your contacts, messages, browsing activity, photos, or location information without you even realizing it,” Senator Fischer wrote when the bill was introduced. “Our bipartisan legislation seeks to curb the use of these dishonest interfaces and increase trust online.”

The problem is that a dark pattern is difficult to define. “All design has a level of persuasion to it,” says Victor Yocco, the author of Design for the Mind: Seven Psychological Principles of Persuasive Design. By definition, design encourages someone to use a product in a particular way, which isn’t inherently bad. The difference, Yocco says, is “if you’re designing to trick people, you’re an asshole.”

Gray has also run into difficulty drawing the line between dark patterns and plain bad design. “It’s an open question,” he says. “Are they defined by the designer’s intent, or the perception in use?” In a recent paper, Gray looked at how people on the subreddit r/AssholeDesign make ethical calculations about design choices. The examples on that subreddit range from the innocuous (automatic updates on Windows software) to the truly evil (an ad on Snapchat that makes it look like a hair has fallen on your screen, tricking you into swiping up). After combing through the examples, Gray created a framework that defines “asshole design” as design that takes away user choice, controls the task flow, or entraps users in a decision that benefits the company rather than them. Asshole designers also rely on strategies like misrepresentation, nickel-and-diming, and two-faced interactions, such as an ad blocker that itself contains ads.

Many of these dark patterns are used to juice metrics that indicate success, like user growth or time spent. Gray cites an example from the smartphone app Trivia Crack, which nags its users to play another game every two to three hours. Those kinds of spammy notifications have been used by social media platforms for years to induce the kind of FOMO that keeps you hooked. “We know if we give people things like swiping or status updates, it’s more likely that people will come back and see it again and again,” says Yocco. “That can lead to compulsive behaviors.”

The darkest patterns of all arise when people try to leave these platforms. Try to deactivate your Instagram account and you’ll find it’s exceptionally hard. First, you can’t even do it from the app. From the desktop version of the site, the setting is buried inside “Edit Profile” and comes with a series of interstitials. (Why are you disabling? Too distracting? Here, try turning off notifications. Just need a break? Consider logging out instead.)

“It’s putting friction in the way of attaining your goal, to make it harder for you to follow through,” says Nathalie Nahai, the author of Webs of Influence: The Psychology of Online Persuasion. Years ago, when Nahai deleted her Facebook account, she found a similar set of manipulative strategies. “They used the relationships and connections I had to say, ‘Are you sure you want to quit? If you leave, you won’t get updates from this person,’” and then displayed the pictures of some of her close friends. “They’re using this language which is, in my mind, coercion,” she says. “They make it psychologically painful for you to leave.”

Worse, Gray says, research shows that most people don’t even know they’re being manipulated. But in one study, he says, “when people were primed ahead of time with language to show what manipulation looked like, twice as many users could identify these dark patterns.” That, at least, offers some hope: greater awareness can give users back some of their control.

