Saturday, December 17, 2016

Lies, Damn Lies, and Statistics

It feels fitting to kick off a blog aimed at improving science literacy with a post on fake news. Many, many articles have come out recently analyzing what’s fake, what’s not, and how to tell the difference, but in some ways it’s possibly too little, too late. Buzzfeed’s Craig Silverman determined that between August 2016 and the November election, the top 20 fake news stories on social media had almost 1.5 million more Facebook “engagements” (likes, shares, etc.) than the top 20 stories from legitimate news outlets.1 It’s difficult to look at statistics like that and not believe that fake news on social media played some role in influencing voters in this election cycle. In the past few weeks, the phenomenon has garnered even more attention and outrage after a man was arrested for shooting a gun inside a Washington, DC pizza restaurant that had been falsely linked to a child sex ring involving Hillary Clinton.2 Fake news, however, is not a new thing: it popped up throughout the Obama administration, and sponsored content, or native advertising, has been a favorite form of online advertisement for much of the last decade. Most people would like to believe that they can distinguish real news from fake, perhaps even that they are uniquely capable of making that distinction, but in truth, we as a society are hugely vulnerable to getting played by the media.

A study by Stanford’s Sam Wineburg and colleagues has been getting a lot of press lately, bringing to the forefront how prevalent difficulties with “civic online reasoning” are among school-aged populations and young adults. Eighty percent of middle school students were unable to identify labeled sponsored content on Slate’s website as an advertisement. Only 25% of high school students recognized that verified accounts on Twitter were more likely to contain real news than similar-looking unverified accounts, and 30% of students actually found the fake news more compelling, due solely to graphic elements on the page. College students were easily swayed by links to reputable news sources and, again, by how polished a page looked, but they rarely followed those links to check the accuracy of statements.3 We’d like to believe that as we get further into adulthood, we become better at telling real news from fake, and this is definitely true, to an extent. Note that in the Stanford study, the tasks for each age group increased in complexity, because our reasoning skills, online or not, do improve to a point as we age, and we become capable of more cognitively complicated tasks. That said, an online survey of 3,015 US adults revealed that, overall, people believed fake news headlines about 75% of the time. While respondents reported remembering exposure to the real headlines more often than to the fake ones (11-57% for real, 10-22% for fake), their evaluations of the two categories’ veracity were similar among those who remembered seeing the headline.4

These statistics suggest that we as a population are hugely vulnerable to the phenomenon of fake news, and I’ll propose that this is due to a combination of social psychology, cognitive psychology, and computer science. I’ll start with the last, because that’s probably the easiest to unpack. In the previously cited study, 23% of respondents named Facebook as one of their major sources of news, which sounds low until you notice it was the third-highest source, behind CNN and Fox News, both at 27%. There are two important points here: first, social media is used almost as much as two of the mainstream powerhouse news sources, and second, people consider social media a source of news, not simply a news aggregator. Facebook in particular has been implicated in the seeming epidemic of fake news surrounding the last election cycle, so it’s important to look at what, exactly, Facebook is doing. Facebook uses very complex algorithms, which I won’t pretend to fully understand, to determine what appears in an individual’s news feed. They take into account things you’ve posted, things you’ve clicked on before, things you’ve liked, things you’ve shared, your interests, on and on and on. Basically, they attempt to provide an experience tailored to you, as opposed to one based solely on what your friends are reading and sharing, which is what we think we’re signing up for. These algorithms don’t just shape your news feed, though; they also shape the stories that show up as “Trending Stories”, suggesting to the user that these are widespread, mainstream stories. They then interact with the psychological factors to create a perfect storm of vulnerability.

People have a tendency to surround themselves with like-minded individuals. This more than likely isn’t news to anyone. When your Facebook friends all hold views similar to yours, and Facebook is inclined to show you the stories it thinks you want to see, you’re bombarded with information you already believe, and you’ve got a recipe for classic confirmation bias.5 We tend to believe and remember the things that confirm our biases and the things we already believe. That’s all well and good, but in the previously cited Ipsos survey, 96% of Donald Trump supporters believed a story about Trump sending a private plane to rescue 200 Marines, as did 68% of Hillary Clinton voters. The story was false, and while confirmation bias makes it perfectly sensible that Trump supporters would believe a story that paints him in a positive light, it doesn’t account for a significant majority of Clinton supporters believing a story that flies in the face of their own rhetoric. So there has to be more.

Unlike what the movie Lucy and long-standing myths would have you believe, we have limited cognitive resources, which forces us to prioritize some information over the rest. Given the huge amount of information we’re bombarded with on the Internet, we have to sort through it quickly. Reading digital sources looks very different from reading traditional print, with people more likely to scan instead of reading in depth, hunt for keywords, and jump around as they read.6 Since relatively few cognitive resources are typically deployed during digital reading, we rely on heuristics, or patterns we recognize, to judge whether stories are credible, particularly when we feel the topic at hand isn’t all that important. These heuristic cues include things like the length of an article, citations of experts (or “experts”), the presence of hyperlinks, and how many other people believe it.7 Even when we do judge a topic to be important, these heuristics remain at play, biasing our more in-depth processing.8 These are all qualities that the creators of fake news can easily exploit, allowing them to deliberately manufacture content they know will seem more credible and be more likely to be believed.

One of the big factors in credibility heuristics is what’s called the “bandwagon effect”. Essentially, you’re more likely to believe a story that is popular or believed by others; they’ve done all the hard work of evaluating the source for you.9 This phenomenon is hugely relevant to social media, where it’s easy to see how many people have read, recommended, or highly rated a story. Not only is news more likely to be found credible if more people have read it, people are more likely to read it in the first place.10 Multiple studies have found that explicit recommendations, such as high ratings, “likes” on Facebook, “upvotes” on Reddit, or “diggs” on, well, Digg, play a larger role in our estimations of credibility than the raw number of views does. It becomes easy to see how the power of a group can drive a story to be read, found credible, and shared, so that it is then backed by the recommendation of an even larger number of people, creating a snowball effect of perceived credibility.11 This is particularly true if we deem the sharer of the news to be like us, so in a place like Facebook, where your friends tend to hold similar values, or within a single subreddit on Reddit, where users are bound together by a common interest (and often common demographic variables), the bandwagon effect is amplified even further.12
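To make that feedback loop concrete, here’s a toy simulation in Python. Everything in it is my own assumption for illustration, not anything from the cited studies: a made-up audience size, a made-up linear link between visible share counts and perceived credibility, and a made-up share rate. The point is just the shape of the dynamic: each round, a fixed audience sees the story, and the fraction who pass it along grows with how credible its current share count makes it look.

```python
def perceived_credibility(shares):
    """Toy assumption: perceived credibility starts low and rises
    with the number of visible endorsements, capping at 1.0."""
    return min(1.0, 0.05 + shares / 1000)

def snowball(rounds=8, viewers_per_round=2000, share_rate=0.2):
    """Each round, a fixed audience sees the story; the fraction who
    share it scales with its current perceived credibility."""
    shares = 10
    history = [shares]
    for _ in range(rounds):
        p = perceived_credibility(shares)
        shares += int(viewers_per_round * p * share_rate)
        history.append(shares)
    return history
```

Running `snowball()` produces a share count that doesn’t just grow, it grows faster each round: every wave of sharers makes the story look more credible to the next wave, which is exactly the snowball described above.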

Obviously, we are sometimes presented with stories and topics that we deem important, and in those situations we rely less on all of these heuristics and give an article a close read. Here, cognitive psychology and the way memory and information processing work play a huge role. It seems as though all information we read, if encoded into memory at all, is encoded equally. Reading and encoding into memory both demand cognitive resources, and we typically don’t appear to spare the resources to evaluate the veracity of a statement before storing it. This means that even a statement that conflicts with our prior knowledge, which should make us wary, is treated very similarly to true statements from a memory perspective.13 After a memory is encoded, there is a recency effect: people are more likely to use memories that were encoded more recently.14 In the case of inaccurate information that conflicts with older information, this translates to people favoring the new, false statements, even when they know better. Consistently recalling these memories helps reinforce them, so over time they become integrated into our belief systems. After two weeks, the persuasiveness of false information tends to increase, and the doubts raised by conflicting information, which may be present immediately after reading, tend to decrease.15

Both the new, false information and the older, accurate information remain present in memory and are integrated into a single belief. The brain doesn’t typically overwrite learned information or memories; rather, it learns an additional pairing of a cue and specific information. For example, an accurate, known statement could be that the sun sets in the west. The cue question “Where does the sun set?” would call up general knowledge to answer “the west”. If you then read an article claiming the sun sets in the north, that creates a new answer to the same cue question. Now if I ask “Where does the sun set?”, it calls up one answer saying “the west” and one saying “the north”. Even after the false statement is debunked, it can still be called up during retrieval, which can help reinforce the false belief.16 And just like that, thanks to the way memory works, fake news becomes a thing we believe.
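The cue-pairing idea can be sketched as a toy data structure. To be clear, this class and its behavior are my own illustrative invention, not a model from the cited research: each cue accumulates answers over time, recall favors the most recently encoded one, and older answers are never erased, only outcompeted.

```python
from collections import defaultdict

class ToyMemory:
    """A deliberately simplistic sketch of cue-based memory:
    encoding never overwrites; it adds another cue-answer pairing."""

    def __init__(self):
        self._store = defaultdict(list)  # cue -> [(time, answer), ...]
        self._clock = 0

    def encode(self, cue, answer):
        """Store a new pairing alongside any existing ones."""
        self._clock += 1
        self._store[cue].append((self._clock, answer))

    def recall(self, cue):
        """Recency effect: the most recently encoded answer wins."""
        traces = self._store[cue]
        return max(traces)[1] if traces else None

    def all_answers(self, cue):
        """Both old and new traces remain available for retrieval."""
        return [answer for _, answer in self._store[cue]]
```

Encoding “the west” and then reading the false “the north” leaves both answers retrievable for the same cue, with the newer, false one favored at recall, mirroring the sunset example above.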