Here’s Why Facebook Is Ending Election Day Experiments

As Election Day gives Americans the chance to choose their senators, congressmen, and governors and vote on ballot initiatives, the most popular social network in the country will do its part to nudge its users to vote. Facebook has finished fine-tuning the tool that shows a prominent “I’m Voting” or “I’m a Voter” button to get users to head to the polls, and as Micah L. Sifry reports for Mother Jones, a few million more of Facebook’s 150 million American users are expected to vote at least partly because Facebook encouraged them, with a little help from their friends. But Facebook’s efforts to promote voting this year will be much narrower than in previous years, and much of that has to do with the politics and public perception of how the social network conducts its research.

The company calls its voting-promotion tool the “voter megaphone,” and, as Sifry notes, it has left users in the dark about how that tool was developed. Facebook has been quietly running experiments to see how it can influence users’ voting behavior, studying how the News Feed affects their interest in politics and the likelihood that they’ll vote.

An experiment conducted during the three months prior to the 2012 Election Day increased the number of hard news stories in the News Feeds of 1.9 million Facebook users. A Facebook data scientist reported that the change, unannounced to users, measurably increased civic engagement and voter turnout.

That experiment was just the latest in a series of research projects that Facebook has undertaken since 2008 with the goal of increasing Americans’ participation in elections. Sifry reports that since that year, Facebook has offered users an easy way to proclaim to their friends that they were voting, and the company’s data scientists have experimented with ways to surface that information in the News Feed to increase voter turnout.

In 2010, Facebook’s data scientists placed different versions of the “I’m Voting” button on the pages of 60 million American users to test the impact of each version and figure out how to optimize the button. As Sifry explains, two groups of 600,000 users served as controls: one saw the “I’m Voting” button but got no information about their friends’ behavior, while the other saw no voting-related content at all.
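Sifry’s description amounts to a standard randomized experiment: one very large treatment arm and two small control arms, with turnout later compared across arms. Here is a minimal sketch of that kind of assignment in Python, using made-up group sizes, arm names, and function names rather than anything from Facebook’s actual system:

```python
import random

# Hypothetical arm sizes loosely based on the design described above:
# roughly 60 million users see the full social message, while two
# groups of about 600,000 serve as controls.
ARMS = [
    ("social_message", 60_000_000),  # button plus friends' voting activity
    ("info_only",      600_000),     # button, but no information about friends
    ("no_message",     600_000),     # no voting-related content at all
]

def assign_arm(user_id: int) -> str:
    """Assign a user to an arm with probability proportional to arm size.

    Seeding the generator with the user ID keeps the assignment stable,
    so a given user always lands in the same arm.
    """
    total = sum(size for _, size in ARMS)
    draw = random.Random(user_id).randrange(total)
    for name, size in ARMS:
        if draw < size:
            return name
        draw -= size
    return ARMS[-1][0]

# After the election, turnout in each arm would be compared; the published
# study did this by matching users to public voting records.
```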

The resulting paper, published two years later in the journal Nature under the title “A 61-million-person experiment in social influence and political mobilization,” cited an “increasing interest in the ability to use online social networks to study and influence real-world behaviour” among researchers.

The paper noted that it was “an open question” at the time whether “online networks, which harness social information from face-to-face networks, can be used effectively to increase the likelihood of behaviour change and social contagion.” Facebook has since shown that social networks can influence their users’ behavior: the 2010 study found that the “I Voted” button was an effective way to exert social pressure on people to vote.

The paper noted that “the Facebook social message increased turnout directly by about 60,000 voters and indirectly through social contagion by another 280,000 voters, for a total of 340,000 additional votes. That represents about 0.14% of the voting age population of about 236 million in 2010.”
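Those headline numbers are straightforward arithmetic on the figures quoted above; a quick back-of-the-envelope check, reproducing the reported values and nothing more:

```python
# Back-of-the-envelope check of the figures quoted from the Nature paper.
direct_votes = 60_000        # turnout increase attributed directly to the message
contagion_votes = 280_000    # additional turnout attributed to social contagion
voting_age_population = 236_000_000

total_additional_votes = direct_votes + contagion_votes
share_of_electorate = total_additional_votes / voting_age_population

print(total_additional_votes)         # 340000
print(f"{share_of_electorate:.2%}")   # 0.14%
```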

While Facebook’s researchers reported that the study had important implications for our understanding of how online political mobilization works, showing that social mobilization is more effective than simple informational mobilization, they also noted the broader importance of social influence for changing user behavior. They found that “showing familiar faces to users can dramatically improve the effectiveness of a mobilization message,” and concluded that:

Online messages might influence a variety of offline behaviours, and this has implications for our understanding of the role of online social media in society. Experiments are expensive and have limited external validity, but the growing availability of cheap and large-scale online social network data means that these experiments can be easily conducted in the field. If we want to truly understand—and improve—our society, wellbeing and the world around us, it will be important to use these methods to identify which real world behaviours are amenable to online interventions.

While the 2010 Election Day experiment and others like it (such as the 2012 experiments that reportedly tested whether the placement of the “I’m Voting” button, or the wording of its message, affected how likely users were to interact with it) may have been intended to help Facebook “understand and improve our society, wellbeing, and the world around us,” the company seems to have lost confidence that good intentions can shield it from bad press and user backlash over future social experiments.

A Facebook spokesman told VentureBeat that the company has chosen to stop some experiments, including the large-scale voter megaphone study, even though it has yet to disclose the full details of its experiments with users’ feeds around the 2012 Election Day. According to Mother Jones, those 2012 studies include an experiment by Facebook’s Solomon Messing that boosted the prominence of hard news stories in the feeds of 1.9 million users; voter turnout among that group rose from 64% to more than 67%.
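Taken at face value, those percentages imply a lift of roughly three percentage points across a group of 1.9 million users, on the order of tens of thousands of additional voters. A rough, illustrative calculation using only the figures reported above (and treating “more than 67%” as exactly 67%):

```python
# Rough illustration only; the 64% and 67% turnout figures are as reported.
group_size = 1_900_000
baseline_turnout = 0.64
observed_turnout = 0.67

additional_voters = (observed_turnout - baseline_turnout) * group_size
print(round(additional_voters))  # roughly 57,000 extra voters in that group
```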

Michael Buckley, Facebook’s vice president for global business communications, told Sifry that the public won’t receive full details until 2015, when academic reports on the experiments are published. And as Sifry notes, most users have little idea of how constantly the company changes and experiments on the algorithm that populates the News Feed. Tension over Facebook’s choice not to disclose the experiments it conducts on its users reached a boiling point over the summer, when a team of academic researchers and Facebook data scientists revealed that they had altered the emotional content of nearly 700,000 users’ News Feeds to determine whether emotions spread contagiously over a social network.

While the impact of the emotional contagion experiment was limited, it crossed an ethical line in many users’ minds. Many were angry that Facebook would manipulate users’ emotions, and their distrust of the company deepened. But as Sifry sees it, users’ distrust is only one reason for the company to be reluctant to disclose exactly what went on in its voting-promotion experiments.

Because Facebook’s user base skews Democratic, the voter megaphone likely pushed more Democrats to the polls than Republicans. That news would not sit well with Republicans on Capitol Hill, and fear of potential backlash could be a reason that Facebook is choosing to hold its cards close — even as the planned deployment of the voter megaphone to the pages of users in all major democracies holding national elections this year makes the need for transparency even greater.

Buckley told Sifry that for 2014’s Election Day, Facebook will deploy the voter megaphone without any research experiments attached. Almost every American Facebook user over the age of 18 will see the “I Voted” button, but that button reportedly won’t be used to further experiment with the voting-promotion process or to measure how the social network affects voter turnout among its users.

But to VentureBeat’s Gregory Ferenstein, Facebook’s voter megaphone research is the “most important academic study on the value of social media in democracy to date,” research that shouldn’t be halted by Facebook’s fear of press or political backlash. (After all, as he notes, the 2010 experiments met with a moderately positive response and little criticism.)

Facebook’s ability to boost voter turnout by 2.2% may sound statistically small, but it is significant by the standards of elections that are won at the margins. Ferenstein writes that campaigns spend millions of dollars to get “a few thousand” more voters to the polls, and the proven efficacy of Facebook’s approach to getting users to vote is unmatched by traditional methods of boosting turnout.

For this year’s Election Day, Facebook is stopping its experiments. All Facebook users will see the same button, or no button at all, and Facebook won’t measure how many users it encouraged to vote or whether some types of messaging are more effective than others. To Ferenstein, that means Facebook is bowing to political pressure to stop innovating, a decision he characterizes as disappointing and as a concession to critics who seek to control the social network’s decisions.

Two competing considerations seem to be at play in the public’s (and the press’s) perception of Facebook’s research experiments: the ethics of the studies versus the societal good they might produce. The positive effect of the voter megaphone studies is, as Ferenstein demonstrates, a straightforward argument to make: most people would agree that the more (informed) voters who get to the polls, the better. And while Facebook’s manipulation of users’ feeds, and therefore of their behavior, seems intrusive, it doesn’t harm anyone and instead has a positive effect on people’s civic engagement.

But the ethics of research like the emotional manipulation study are less clear. Facebook relies on its privacy policy and terms of use as a shaky basis for users’ informed consent to the research its data scientists conduct, and users are still disconcerted to learn that the News Feed algorithm is doing more than trying to show them the most engaging updates, photos, and news stories. Beyond that, the News Feed has become the vehicle by which Facebook seeks to influence users’ emotions and behavior.

The immediate societal benefits of social scientists gaining a better understanding of how emotions spread via a social network are less obvious than the benefits of getting more young users to vote. But when it comes down to principles, can one study really be more ethical than the other? Both are designed to affect behavior and, in doing so, to quantify Facebook’s ability to affect behavior. Conversely, should the ethics of an experiment be measured by its potential to harm users? If an experiment doesn’t negatively affect users, even in the absence of any positive effect on them or the society of which they’re a part, is it okay?

As Tech Cheat Sheet reported recently, Facebook announced that it was implementing new guidelines and policies governing the research that the company’s data scientists conduct on its users. But the social network has not been transparent about exactly what the new guidelines require or what review process new experiments will undergo, which has done little to renew users’ trust or to give them confidence that Facebook’s research now measures up to the ethical yardstick of the regulations that federally funded or academic research must follow.

But because Facebook is a private company, it’s technically just running A/B tests on its users without their knowledge, and without the oversight of an outside regulatory body. The difference is that most tech companies don’t present their commercial A/B tests as psychological or sociological research; they simply keep the results of those tests private.

Facebook, on the other hand, tries to combine the best of both worlds, seeking the public credibility of academic research while keeping the unregulated freedom to design experiments like private A/B tests. Would it be better for Facebook to conduct experiments on its users in silence, forgoing the publication of its social science research and leaving users in the dark as to whether their News Feeds are being manipulated in ways that go beyond the usual changes to the algorithm? That would be a hard sell not only for users, but also for researchers who see the potential positive effects of Facebook’s research.

Facebook’s concession to critics as it dramatically narrows the scope of the voter megaphone program could be regarded not only as a loss for social science but also as a step in the wrong direction. Facebook would get a better reception from the public and the press if it made its research processes more transparent, disclosing studies earlier and with more regard for the influence that experiments have on users’ experience of the social network.

Instead, the social network is keeping everyone in the dark about the data it collects and analyzes, “the largest field study in the history of the world,” as Adam Kramer, Facebook data scientist and lead author of the emotional contagion study, characterized it in a 2012 interview on Facebook’s website. But while Facebook does possess one of the largest sets of behavioral and social data ever assembled, all of its data science team’s findings are limited to the activity that occurs on Facebook, and the company’s motivations limit its research’s utility, too.

The data science team’s goal is, ostensibly, to figure out how to provide a better service to the social network’s members, which makes it fair to ask what Facebook is actually contributing when it occasionally publishes its research for the benefit of social scientists.

Instead of cynically assuming that Facebook’s research experiments are designed to manipulate users, or naively assuming that its algorithm experiments are meant to make the world a better place, the public and the press need to be more intelligently critical of the studies Facebook conducts. Why is Facebook conducting them? Which outcomes can be generalized, and which are limited to Facebook’s own engineered environment? What knowledge is the company contributing to our broader understanding of psychology and social science?

None of that can happen if Facebook withdraws its research from public view for fear of another PR disaster, or if the company continues to hold its commercial research apart from the ethics of academic research.