Here at The Collaborative Library, our main aim is to share ‘bite-sized’, reliable summaries of scientific research as widely as possible, mostly to counteract the spread of false information, or ‘disinformation’, online.
Why is disinformation such a problem? Well, according to Professor Tom Buchanan from the University of Westminster, UK, although it’s part of everyday life these days, the creation or manipulation of information has the potential to deceive or mislead people, tending to push them towards extreme views. This is sometimes done to cause harm for political, personal, or financial gain, and the political kind is viewed as a threat to democracy because it erodes trust and undermines civil society. For example, Tom and others describe how groups of people have attacked telecommunications masts because, after seeing or hearing a series of false stories, they believed 5G towers were causing coronavirus.
How does false information spread? Usually, it’s by ordinary people deliberately sharing or ‘liking’ posts on social media platforms like Twitter, Facebook, and Instagram, from where it can spread across the web rapidly and go viral. To give you some idea of the scale: between 2015 and 2017, over 30 million users shared Facebook and Instagram disinformation posts created by the Internet Research Agency, a disinformation group, with their families and friends.
Why do we do this? One idea is that most of us are on ‘auto-pilot’ when using social media and like and re-share things in quite an absent-minded way: sharing is more spontaneous than planned.
Researchers think there are three main things that may contribute to this:
1. Consistency: this is the sharing or liking of posts containing content that is consistent with our past behaviour or prior views and attitudes. Tom gives the example that people who vote Republican are more likely to post or ‘like’ right-wing views.
2. Consensus: this is basically the idea that people believe others share the same view about the information, and so they re-post it. It follows that something shared widely by many people is seen as more credible. This, by the way, is a tried and tested mass-marketing tactic called ‘social proofing’, often seen in the form of reviews or sales rankings on websites like Amazon – pretty clever, eh? Bots, which are computer software programmes that perform repetitive tasks, also spread disinformation online by repeatedly re-tweeting posts to drive up the re-tweet count, leading others to believe there must be agreement across many people, which unfortunately prompts further sharing. On the upside, consensus could also be used to boost helpful and accurate messages that counter disinformation.
3. Authority: this is about posts or messages containing disinformation that seem to come from a credible person or trustworthy source – an effect which, just like consistency and consensus, has been shown in other online experiments.
The idea is that if you put all three things together, you’ll get disinformation posts whose reach dramatically increases and which may well go viral.
Who are the spreaders? It’s not everybody – only around 10% of people according to other studies, which, although a minority, is still a relatively large number of people. Being able to tell these people apart from others offers some hope for the future: they might be intercepted with counter-messages earlier on in their potentially harmful posting ‘campaign’. Evidence shows that it’s mostly older people, partly due to a poorer ability to interact with text, sound, images, video, and social media, or to find, manipulate, and use such information – referred to as a lack of ‘digital literacy’. Sometimes it’s done unwittingly, or because people have been tricked. However, at the time of writing, Tom highlights that there is limited evidence that education programmes improve people’s digital literacy. This brings us to another important point: it’s possible some people know what they’re seeing is disinformation yet decide to spread it anyway, which may render digital literacy education programmes useless. Personality-wise, again just like in advertising campaigns, it seems to be people with greater agreeableness who post or re-post disinformation, while greater conscientiousness in posting political messages could reflect people who are more cautious and pay more attention to detail.
Across four different online experiments including 2,634 people, Tom and his colleagues tested whether participants’ likelihood of re-posting on social media was related to levels of consistency, consensus, and authority, but also to their pre-existing views/beliefs and level of digital literacy. All studies were run in almost exactly the same way using mock-ups of the Facebook, Twitter, and Instagram platforms, and the Facebook experiment was repeated in the US to explore any cultural differences compared with people’s posting behaviours in the UK. Participants first gave basic demographic information and completed questionnaires assessing their personality, political orientation, and digital literacy. They were then asked to rate three genuine disinformation posts, and their likelihood of sharing them, after viewing three stories (or screenshots) taken from websites like Infowars.com, which are renowned for promoting right-wing ideas and ‘fake news’. To put a name to a face here: Infowars was recently successfully sued by victims’ families for making false claims that the Sandy Hook shooting in the US was a hoax. The three posts and stories were shown to each participant in different random combinations after they had been told that “a friend of yours recently shared this on Facebook/Twitter/Instagram, commenting that they thought it was important asking all their friends to share it”. The posts were altered to change the level of consistency, consensus, and authority (see the original article for more information about this). Participants were then asked to rate how accurate or truthful they felt the posts were and how likely it was that they had seen the post before, but also whether they had ever shared a political news story online that they later discovered was made up, or one they knew at the time was false.
Overall, Tom’s team showed that consensus, authoritativeness, and for the most part digital literacy were unrelated to the spread of false information via re-posting/sharing, liking, and re-tweeting posts or stories on Facebook, Twitter, and Instagram. Holding more ‘right-wing’ political views was related to a greater likelihood of sharing and liking false information on Facebook and Twitter (but not Instagram), which may mean that pre-existing beliefs are important; more intentional sharing of false political information was reported by men. Curiously though, conservatism was mostly unrelated to past sharing of disinformation. Believing a story to be true, and the likelihood of having seen it before online (familiarity), were the strongest predictors of someone’s likelihood of sharing false information on Facebook, Twitter, and Instagram. Findings for lower levels of education were mixed, while being a man seemed to be an important contributor to sharing disinformation, mostly through accidental sharing in the past. Interestingly though, greater digital literacy was related to deliberate attempts to share disinformation and connected to higher levels of conservatism. Participants’ age and personality traits, including agreeableness, conscientiousness, neuroticism – that’s having a tendency to express more negativity – and extroversion – being more sociable, talkative, assertive, and excitable – showed mixed findings across the four studies in terms of the likelihood of sharing/liking posts or re-tweets, or past sharing, of disinformation. There were also mixed results for reports of past sharing behaviour overall across the four studies.
Challenges with the study were that all the questionnaires were self-report measures, which may mean that people gave socially desirable responses. Also, the results across the studies were inconsistent, and problems with the data itself may have contributed to the mixed findings (see the quality assessment checklist below for more information).
Why is it important? Well, a bit like putting out a fire before it tears through a building and causes significant damage, Tom’s study has taken an important step in helping us begin to disentangle some of the things that may be important drivers of false information sharing online. However, more work is needed to clarify what these drivers are, which may help us target disinformation in the future with things like counter-messaging campaigns – ‘extinguishing’ or managing the fire before it grows, spreads, and causes unnecessary harm.
“Cell tower and 5G text on a cloudy white sky” by Ivan Radic is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/?ref=openverse.
“Money” by thethreesisters is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/?ref=openverse.
“Ballot box” by FutUndBeidl is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/?ref=openverse.
“FACEBOOK LIKE” by owenwbrown is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/?ref=openverse.
“Twitter” by eldh is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/?ref=openverse.
“Instagraman via > www.petapixel.com/2012/10/26/a-instagram-camera-halloween-costume-that-actually-takes-pictures/” by Takeshi Life Goes On is licensed under CC BY-SA 2.0.
“Republican Elephant – 3D Icon” by DonkeyHotey is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/?ref=openverse.
“Trashed ‘bot.” by The Magic Tuba Pixie is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/?ref=openverse.
“two older men discussing” by Kevin Lütolf is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/?ref=openverse.
“Older men” by Jeena Paradies is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/?ref=openverse.
“2019-01-01 InfoWars uses my E Warren” by mdfriendofhillary is licensed under CC BY-SA 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/2.0/?ref=openverse.
“File:Grenfell Tower fire (wider view).jpg” by Natalie Oxford is licensed under CC BY 4.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0?ref=openverse.
“US Flag” by jnn1776 is licensed under CC BY-SA 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/2.0/?ref=openverse.
“uk flag” by twicepix is licensed under CC BY-SA 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/2.0/?ref=openverse.
YOUR LAY SUMMARY INFORMATION
|Title of lay summary
|Why do people spread false information online? Video Lay Summary
|Lay Summary Author(s)
|Author(s) Affiliation(s) / participating organisation(s)
|Dr. Anja Harrison
|Science Area Subject
|Key Search Words
|Other relevant Collaborative Library lay summary links
|What is the licence for your lay summary?
|Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) (for all other options selected above)
ORIGINAL E-PRINT INFORMATION
|If a pre-print or post-print, please provide a direct weblink or Digital Object Identifier (DOI):
|Provide the full weblink or DOI of the published scientific article:
|Are there any other open-access data weblink(s) that might be helpful (e.g., for relevant data repositories see fairsharing.org):
|Has this work been applied in ‘real-life’ settings (e.g., local service evaluation projects)? If so, add any relevant weblink(s) here:
|Title of the original peer-reviewed published article:
|Issue (if applicable):
|Page numbers (if applicable):
|Year of publication:
|Contributors and funders:
No conflict of interest reported
|Original Article language:
|True experimental study (quantitative outcomes/units, e.g., temperature)
|What licence permission does the original e-print have? (For more information on this, please see our permissions video):
|Attribution 4.0 International (CC BY 4.0)