Q&A: Facebook Uproar Exposes Concerns Over Corporate Experiments

Free social media isn't free, an expert on digital commentary says.

A woman looks at the Facebook website in Munich, Germany, on February 2, 2012. Whether her "news feed" contains more positive or negative posts may affect her mood, according to a controversial study.

Revelations about a secret mood manipulation experiment that Facebook conducted on nearly 700,000 of its users created a lot of bad feelings this week.

The June 17 Proceedings of the National Academy of Sciences study, led by Facebook data scientist Adam Kramer, described altering users' exposure to positive or negative posts from friends for a week in 2012.

The results suggested that positive and negative emotions are contagious, in a small way, across social media. But negative feelings were the ones spreading widely with news of the study, as ethicists like Art Caplan of New York University's Langone Medical Center raised questions about how ethically it was conducted.

The University of Wisconsin-Madison's Dietram Scheufele knows a thing or two about the power of emotions in digital media. The professor of science communication has documented a psychological "nasty effect" of reader comments on online news stories. His team showed that negative comments make people dislike the subject of otherwise neutral-toned news reports, while positive ones skew perceptions the other way.

Scheufele tells National Geographic in an email interview that the recently disclosed Facebook study points to a need for the public to know more about the world of corporate experiments and raises questions about study ethics in the era of social media.

Is there a larger meaning in the uproar over the Facebook experiment?

We've talked for a long time now about technology moving faster than regulation, but typically, we're talking about this with regard to the natural or biological sciences.

I think this [Facebook] study shows nicely that the problem might be even more urgent for the social sciences, where the realities of what is possible with big data collection and analysis have so far outpaced any legislative mandate to protect human subjects.

The latter will always play catch-up with the former. The only question is how far we're willing to fall behind. The Facebook study shows that we have clearly fallen behind way too far.

So what should the public be asking now?

What should corporations be allowed to do internally with the data they collect, and what should be acceptable in terms of published research? Realistically, there will always be a lot more latitude in terms of what corporations can do than what is acceptable in academic research.

We as consumers have gotten so used to having access to online tools and services that we don't realize anymore that things like email, messenger, photo storage, music storage, and other social networking and cloud services shouldn't be free. They're expensive to provide and—technically—they should all come with a price tag attached.

The fact that we as consumers do not see the price tag doesn't mean it doesn't exist, which means that we end up being the product that's [making money] through the terabytes and terabytes of data we provide every day by having our emails, photo tags, messages, and clicks analyzed by corporations. All that data will be collected and analyzed to increase profits and build better products. And that won't change until we're willing to pay.

So what does that mean for "manipulating" the emotional content of someone's Facebook feed?

If we see this as an issue of corporate data collection, it raises two questions: A, do the benefits of the study outweigh the consumer backlash in terms of Facebook's bottom line? And if they do for corporations like Facebook, we may see more of these studies.

And B, should corporations be allowed to experimentally assign people to conditions that might do psychological harm?

What if a severely depressed person commits suicide because they ended up in the negative emotion condition? The parallel would be GM trying out airbags of different quality in the cars it sells to consumers, to see what the fatality rates, lawsuits, etc., are in each condition.

What are the differences between academic and corporate research?

For published research, the standards should be completely different, of course. We're seeing a widening gap between academics—who often don't have access to these data or to manipulations like the ones administered in the Facebook study—and corporations, who can create these data themselves, as the Facebook study shows.

This puts academics at an inherent disadvantage, one that is exacerbated by the fact that academic [review panels] would never approve this study in the way it seems to have been conducted: without debriefing, various other safeguards, and so on.

A notable exception is Twitter, which recently issued a request for proposals inviting academics to apply for access to their complete data. In other words, they have been pretty open in making at least raw tweets available to academic researchers.

What's the solution?

The solution, in my opinion, is public-private partnerships. If Facebook, Twitter, or other firms partnered with academic scholars in communication, psychology, political science, sociology, and so on, they would immediately capitalize on the infrastructure already in place in terms of ethics boards and other oversight.

They would also "buy" valuable expertise in terms of answering complex and important research questions without raising ethical concerns. As Susan Fiske, the editor who oversaw submission of the Facebook study to the journal, noted, this is an important topic. [Fiske said she was "creeped out" by the study in comments to The Atlantic.]

I completely agree. Given what is at stake for society, I hope we can build partnerships that not only help us do the research necessary to understand a rapidly changing technology landscape, but also move forward in a responsible fashion.
