
Scientists Explain Why ‘Doing Your Own Research’ Leads to Believing Conspiracies

Researchers found that people who search online to vet misinformation risk falling into “data voids” that increase belief in conspiracies.
ABSTRACT breaks down mind-bending scientific research, future tech, new discoveries, and major breakthroughs.

If you've ever found it puzzling that "do your own research" is a slogan for conspiracy theorists of all stripes, new research may have some answers.

While conventional wisdom holds that researching the veracity of fake news should reduce belief in misinformation, a study published on Wednesday in Nature found that using online search engines to vet conspiracies can actually increase the chance that someone will believe them. The researchers point to a known problem in search called “data voids”: sometimes there simply isn’t much high-quality information available to counter a misleading headline or a fringe theory. So when someone sees an article online about an “engineered famine” caused by COVID-19 lockdowns and vaccines, and runs an unsophisticated search based on those keywords, they may find articles that reaffirm their bias.


“The question here was what happens when people encounter an article online, they’re not sure if it’s true or false, and so they go look for more information about it using a search engine,” Joshua Tucker, co-author and co-director of NYU's Center for Social Media and Politics, told Motherboard. “You see exactly this kind of suggestion in a lot of digital literacy guides.”

In particular, Tucker explained, their research team was interested in knowing how people verify news that’s just happened and hasn’t yet had a chance to be verified by fact-checkers like Snopes or PolitiFact. 

In the first experiment of the study, which began in late 2019, some 3,000 people across the US evaluated the accuracy of news articles that had been published within a 48-hour period on topics like COVID-19 vaccines, the Trump impeachment proceedings, and climate events. Some articles came from reputable sources, while others were intentionally misleading. Half of the participants were encouraged to search online to help them vet the articles. Meanwhile, professional fact-checkers labeled each article ‘true,’ ‘false or misleading,’ or ‘could not determine.’

People who had been nudged to look for more information online were 19 percent more likely to rate a false or misleading article as fact, compared to those who weren’t encouraged. 


“What we find over and over again is that people are over-relying on these search engines or their social connections. They put this blind faith in them,” said Chirag Shah, a professor of information science at the University of Washington who wasn’t involved in the studies. “They think they’ve done their due diligence and checked but it makes it worse than not checking.”

In four other experiments, which ran between 2019 and 2021, the researchers found that even when people had initially rated an article as misleading, roughly 18 percent of them changed their minds and called the article true after searching online (compared to just shy of 6 percent who switched from true to false). This held even when the articles were months old rather than hours old, and even when the news was well covered, like the COVID-19 pandemic.

“It was incredible to us how remarkably consistent this effect was across multiple different studies that we ran,” said Tucker. “That’s a real strength of this work. This isn’t just ‘Oh we ran one study’. We’re very, very confident this is happening.”

The researchers traced this effect to the quality of the information churned out by Google’s search engine. Partly, that comes down to data voids or, as Tucker put it, the fact that “the internet is full of junk theory.”


“There may be false information out there but not the corresponding true information to correct it,” he said. 

These are all issues we’ll continue to grapple with, particularly as large language models and generative AI flood the internet with even more misinformation. These data voids will only grow. 

But it also partly came down to how people were searching, added co-author Zeve Sanderson. Seventy-seven percent of people who used the false article’s headline or even its URL as their search query got misinformation in their top results. For example, searching for “engineered famine”—based on the false headline “U.S. faces engineered famine as COVID lockdowns and vax mandates could lead to widespread hunger, unrest this winter”—turned up false results more than half the time, as opposed to just “famine,” which surfaced no misinformation at all.

“That’s not what we’d consider super sophisticated searching strategies but are strategies that we saw people use,” Sanderson, the founding executive director of NYU's Center for Social Media and Politics, told Motherboard. He added that low-quality news publishers all tend to use the same low-quality terms, exacerbating the effect.

A spokesperson for Google told Motherboard in an email that it’s not surprising people would find these kinds of results when they search this way: although the search engine prioritizes quality, people who look for something specific, like a headline, will find it and similar content.


“The four most dangerous words are ‘do your own research’,” said Shah. “It seems counterintuitive because I’m an educator and we encourage students to do this. The problem is people don’t know how to do this.”

Digital literacy curricula shouldn’t just tell people to search; they should offer advice on how to search, Kevin Aslett, an assistant professor at the University of Central Florida and a co-author of the study, told Motherboard. He and his fellow researchers suggest paying attention to the source of the information you find, not just its content.

Aslett added that more resources need to be pumped into fact-checking organizations so they can at least begin to fill the data voids that exist.

Google’s spokesperson said that a lack of quality information on particular topics is a known challenge for search engines, but that the company has built features to try to tackle it.

For example, the About This Result feature lets people see more context around a result by clicking the three dots next to it. Google also provides content advisories when a situation is rapidly evolving, or when it knows there isn’t a lot of reliable information available.

The spokesperson also emphasized that several independent studies show Google surfaces significantly higher-quality top results, with less harmful misinformation, than other search engines. “At Google, we design our ranking systems to emphasize quality, and not to expose people to harmful or misleading information that they are not looking for. And we also provide people tools that help them evaluate the credibility of sources, and see a wide range of perspectives, even if they already have an idea in mind.”

Shah said that it’s the responsibility of tech companies like Google to offer tools that help people parse fact from fiction. “We should be equipping them with the right tools. Those tools could come and should come from tech companies and search service providers.” But he added that it’s not up to those companies, or to governments, to police content: “It’s not only technically infeasible but morally and socially wrong to suppress everything.”

“First we need to have that awareness of ‘Just because you’re doing your research, that doesn’t mean that’s enough.’ The more awareness people have, the more chance we have of having people think twice about the information they’re reading,” said Shah.