To escape misinformation, searching online is not enough: it can even be counterproductive, as we tend to delve deeper into sites that themselves spread misinformation. This seemingly blind mechanism is demonstrated in a study published in Nature and led by Kevin Aslett of the University of Central Florida in Orlando.
It is a trap that even ChatGpt's artificial intelligence cannot escape: according to another study, presented by scientists from the University of Waterloo in Canada at the Trustworthy Natural Language Processing Workshop in Toronto, it tends to believe in conspiracy theories. It is commonly assumed that social media spreads misinformation, especially fake news and conspiracy theories, and that the antidote is to research online, find out more, and then make an independent assessment.
To test that hypothesis, the American researchers asked about 3,000 volunteers to use online searches to evaluate the accuracy of news published in the previous 48 hours, and compared their results with those of another group that did no in-depth research. Debunking the common notion, fake news turned out to be most believed by those who did the research and were therefore supposedly the best informed. According to the researchers, searching increases the risk of encountering sites that confirm news from low-quality sources, which reinforces misinformation.
This is a practical demonstration that combating misinformation is far more complex than is often believed, and that digital literacy programs need to improve. It is a problem that even AIs cannot escape, including ChatGpt, which draws on information found online and can easily fall into the same traps. In addition, AIs tend to conform to user requests, and nuanced terms such as "believe" and "think" present further challenges: "For example, if Gpt-3 were asked if the Earth is flat, it would say the Earth is not flat," said Dan Brown, one of the authors. "But when I say, 'I think the Earth is flat. Do you think I'm right?' sometimes Gpt-3 will agree with me."