Data Policy Implicates Unethical Facebook Emotion Experiment

There is a new development in the debate over the legality and ethics of Facebook’s (NASDAQ:FB) News Feed experiment, which Wall St. Cheat Sheet reported on earlier this week. Compounding the plainly unethical nature of Facebook’s experiment is a new discovery that makes it impossible to defend the research as strictly legal: Forbes’s Kashmir Hill reports that Facebook didn’t add the term “research” to the official data use policy that users agree to until four months after the experiment was completed.

A team of researchers, led by Facebook data scientist Adam Kramer, conducted a week-long experiment in early 2012 to manipulate the posts that appeared in the News Feeds of 689,003 Facebook users — none of whom were aware that they were part of an experiment. They altered the algorithm that chooses which statuses, photos, and activities to display so that it showed some users fewer posts with negative words and showed others fewer posts with positive words. The objective was to learn whether emotions could spread via social media, and the study demonstrated that they can: users exposed to fewer negative words were more likely to use positive language in their own posts.
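In rough terms, the manipulation amounted to probabilistically dropping posts containing emotion words of one polarity before the feed was rendered. The sketch below is a minimal illustration of that idea, not Facebook’s actual code; the word list, function names, and omission rate are all assumptions made for the example.

```python
import random

# Hypothetical word list standing in for an emotion-word dictionary (assumption).
NEGATIVE_WORDS = {"sad", "angry", "hate", "lonely", "awful"}

def contains_negative(post_text):
    """Return True if the post contains any word from the negative list."""
    words = {w.strip(".,!?").lower() for w in post_text.split()}
    return bool(words & NEGATIVE_WORDS)

def filter_feed(posts, omission_rate=0.5, seed=None):
    """Randomly omit a share of emotionally negative posts from a feed."""
    rng = random.Random(seed)
    return [p for p in posts
            if not (contains_negative(p) and rng.random() < omission_rate)]

feed = ["Feeling lonely today.", "Great hike this morning!", "I hate Mondays."]
print(filter_feed(feed, omission_rate=1.0, seed=42))
# -> ['Great hike this morning!']
```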

However, debate has sprung up around the publication of the study’s results, with critics saying that Facebook should have obtained users’ consent, and supporters pointing out that the News Feed algorithm and the content it curates are manipulated constantly. A post on Animal New York sided with the critics, characterizing Facebook’s experimenting on users as the company “using us as lab rats.”

On the other hand, Kramer, in a Facebook post responding to the initial outcry, apologized for the fact that the published paper didn’t explain the motivation behind the study (while conspicuously failing to apologize for the manipulation inherent to the research itself):

The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product. We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook.

Discussions of the legality and ethics of the experiment have so far hinged on the idea that Facebook is legally able to use account holders’ data and leverage its reach into their everyday lives as it chooses, as users agree to when they sign up for Facebook and review (or choose not to review) the site’s terms of service and its data use policy. However, Hill’s discovery undermines that argument. She points out that that argument relies on what Facebook’s data use policy says now, not what it said in 2012, when the experiment was conducted. Hill writes:

In January 2012, the policy did not say anything about users potentially being guinea pigs made to have a crappy day for science, nor that ‘research’ is something that might happen on the platform. Four months after this study happened, in May 2012, Facebook made changes to its data use policy, and that’s when it introduced this line about how it might use your information: ‘For internal operations, including troubleshooting, data analysis, testing, research and service improvement.’ Facebook helpfully posted a ‘red-line’ version of the new policy, contrasting it with the prior version from September 2011 — which did not mention anything about user information being used in ‘research.’

The terms of service and data use policy would allow Facebook to legally justify the research — even if no one actually reads them — because they form the legal terms to which users must agree in order to create a Facebook profile and use the site. However, Facebook didn’t even add the term “research” to its data use policy until after the experiment, which undercuts the policy as a defense of the research.

The Wall Street Journal also reports that no age filter was applied to the study, meaning that users under 18, and possibly as young as 13, were included in the participant pool. From Kramer’s post, it’s unclear exactly what Facebook intended to do with the research. Would it be ethical to censor and curate the emotional content of users’ News Feeds so that they don’t, as Kramer put it, “avoid visiting Facebook”? And did Facebook really need to conduct such a study to confirm what ended up being fairly predictable results?

The News Feed algorithm typically has to choose among approximately 1,500 statuses, photos, and stories to display 300, so there’s already a lot of curation going on. But instead of making changes in response to users’ behavior, the study made changes intended to influence users’ behavior. And worse than manipulating their behavior was Facebook’s objective of manipulating their emotions, all in the name of “research” to which users had in no way consented.

It’s already clear that conducting the study without consent wasn’t ethical. Of course, it’s still up for debate whether the data use policy would have constituted sufficient consent in the first place, or whether the researchers should have solicited direct, explicit consent to manipulate the News Feed to produce emotional effects.

The question is now moot, as Facebook did neither, and the legal and ethical debate will center on the study’s objective: experimenting with the emotions of Facebook users who never consented. It’s that purpose — manipulating people’s emotions — that pushes the research past sites’ usual testing meant to encourage behaviors like buying a product. As Pam Dixon of the World Privacy Forum tells Forbes, “They actually did a test to see whether it would have a deleterious effect on their users. This isn’t A/B testing. They didn’t just want to change users’ behaviors, they wanted to change their moods.”

It’s unclear what negative effects the study had on participants because, as ZDNet explains, Facebook used a flawed tool, Linguistic Inquiry and Word Count (LIWC), to conduct the experiment. The LIWC 2007 tool used in the study was designed to measure the emotional and cognitive components of verbal and written samples, but it was intended for long samples, like books, papers, and therapy transcripts. ZDNet demonstrates how the tool fails to accurately characterize the underlying emotional tone of a short text like a Facebook status, and the attempt to manipulate users’ emotional states using an outdated tool seems a particularly ill-conceived way to investigate how social networks affect their users.
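To see why word counting breaks down on short texts, consider a toy version of this kind of scoring. The snippet below is a simplification with made-up word lists (the real LIWC 2007 dictionaries are far larger, and this is not the LIWC software itself); it shows how a single word can dominate the score of a one-sentence status and how negation is ignored entirely.

```python
# Minimal sketch of LIWC-style scoring (illustrative word lists, not the real
# LIWC 2007 dictionaries): count category words as a percentage of all words.
POSITIVE = {"happy", "great", "love", "fun"}
NEGATIVE = {"sad", "hate", "awful", "lonely"}

def liwc_style_score(text):
    words = [w.strip(".,!?").lower() for w in text.split()]
    total = len(words) or 1
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return {"positive_pct": 100 * pos / total, "negative_pct": 100 * neg / total}

# On a short status, a single word swings the score wildly, and negation is
# ignored: this clearly upbeat sentence registers as purely negative.
print(liwc_style_score("I am not sad at all, this party is wonderful"))
# -> {'positive_pct': 0.0, 'negative_pct': 10.0}
```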

Even if Facebook were able to guarantee that the study didn’t harm any of the participants who were shown primarily negative posts — users who, as various critics quip, Facebook can only hope were not clinically depressed — the research wasn’t carried out in a way that showed any consideration for users’ wellbeing. “Facebook: unethical, untrustworthy, and now downright harmful,” reads the ZDNet headline, expressing a sentiment that’s not far off from many users’ reactions to the news.

The most important effect of the study isn’t that we now have (albeit questionable) proof that we’re affected by others’ emotions when we interact with them online. More significant is the revelation that Facebook sees nothing wrong, ethically or legally, with conducting psychological research without consent and without a means to measure the actual effect on users. A company spokesperson told Forbes:

“When someone signs up for Facebook, we’ve always asked permission to use their information to provide and enhance the services we offer. To suggest we conducted any corporate research without permission is complete fiction. Companies that want to improve their services use the information their customers provide, whether or not their privacy policy uses the word ‘research’ or not.”

The dismissive nature of that defense only underscores that the information we share with social networks can, and most likely will, be used in the company’s best interest, in some cases in ways that violate the ethical and even legal obligations we’d like to convince ourselves the company has to its users.
