Looking out for fake news

A fake letter discrediting a ruling politician, accusing him of high treason and of betraying his country to foreign powers. You might be surprised to learn that this is not a recent case: it is a story retold by Thucydides, in which the Spartan general Pausanias was walled up inside the temple of Athena in 468 BC, where he had taken refuge after being incriminated by the fabricated letter.

The term “fake news” only entered our dictionaries after 2016, but upon closer inspection the phenomenon of fake news, or news that is not entirely true, has been around since ancient times. What has changed in recent years? The advent of social networks and the increasingly pervasive use of the internet through smartphones have boosted the speed and reach of the phenomenon. Francesco Pierri is a researcher at the Department of Electronics, Information and Bioengineering of Politecnico di Milano, as well as a visiting researcher at the Information Sciences Institute of the University of Southern California and, formerly, at the Observatory on Social Media at Indiana University. In 2021, his doctoral thesis analysed the effects of the spread of disinformation and misinformation on online social networks. He has published many articles on the topic in journals and has taken part in various international conferences, and his research continues to evaluate the social impact of fake news. The researcher tells us how.

Francesco Pierri
What is your academic background and why did you choose this field of research?

«I hold a double Laurea Magistrale (equivalent to a Master of Science) in Computer Engineering from the Politecnico di Torino and the Télécom Paris “Grande École”. I moved into the world of academia during my first research traineeship at École Polytechnique/INRIA, where I worked for six months to complete my (double) master’s thesis. After my degree, I worked for another six months at IBM Research in Zurich as an Assistant Researcher, and in the meantime I applied for a Ph.D. programme at the Politecnico di Milano. I won a ministry scholarship for the inter-departmental doctorate in “Data Analytics and Decision Sciences”, an “open” scholarship without a specific research topic. I therefore planned the objectives of my Ph.D. thesis together with my two supervisors, professors Stefano Ceri and Fabio Pammolli, choosing the problem of disinformation (“fake news”) on social networks as the subject of my research. The most stimulating aspect of my research is the constant need to study work from fields other than my own, such as political science, psychology and sociology, in addition to trying to assess the real-world impact of what happens in the virtual world».

Fake news is a phenomenon that has existed since ancient times. Why have you chosen to research its online form?

«The problem of fake news has become particularly prominent in recent times because of the possibilities offered by the internet, and in particular by social media, of connecting millions, if not billions, of individuals from one side of the planet to the other, and of allowing them to produce content on a large scale with ease. At the same time, we are witnessing a severe crisis for newspapers and traditional media, which has caused alternative forms of information to proliferate online. The combination of these phenomena with the psycho-cognitive traits of human beings means that we are now vulnerable and exposed to a range of potentially harmful content, which is often amplified by “malicious” agents in a coordinated manner, leading to tangible negative consequences in the real world».

How do you analyse the vast amount of content that is continually posted?

«Since the American elections in 2016, social platforms have proven ill-equipped to manage problematic content on several fronts, including disinformation. The traditional approach combines manual content analysis, entrusted to human moderators, with automated “machine learning” techniques (a branch of artificial intelligence). However, since the beginning of the COVID-19 pandemic, almost all of the leading platforms have begun to implement more aggressive and proactive strategies, ranging from the outright removal of particularly problematic accounts and messages, which often sparks bitter debates about freedom of speech, to “softer” moderation based on warning labels».
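To make the hybrid approach Pierri describes a little more concrete, here is a minimal sketch of how an automated classifier might be combined with human review and “soft” warning labels. Everything in it, from the toy training posts to the thresholds, is illustrative; it is not any platform’s actual pipeline nor the interviewee’s own method, and it assumes scikit-learn is available.

```python
# Minimal sketch of hybrid content moderation: an automated classifier scores
# new posts, confident cases get a soft warning label, borderline cases are
# routed to human moderators. Training data, labels and thresholds are
# purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labelled examples (1 = problematic, 0 = benign) standing in for a real
# moderation dataset.
train_texts = [
    "miracle cure suppressed by doctors, share before it is deleted",
    "vaccines contain microchips, wake up",
    "local council approves new cycling lanes",
    "university publishes open dataset on air quality",
]
train_labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(vectorizer.fit_transform(train_texts), train_labels)

def moderate(post: str, label_threshold: float = 0.8, review_threshold: float = 0.5) -> str:
    """Return an action for a post: warning label, human review, or no action."""
    score = classifier.predict_proba(vectorizer.transform([post]))[0, 1]
    if score >= label_threshold:
        return f"attach warning label (score={score:.2f})"
    if score >= review_threshold:
        return f"queue for human moderator (score={score:.2f})"
    return f"no action (score={score:.2f})"

for post in ["doctors hide this miracle cure", "new cycling lanes open today"]:
    print(post, "->", moderate(post))
```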

What repercussions for society and public opinion have you been able to observe?

«It is generally very complicated to demonstrate causal relationships between what occurs on social media and events in the world, especially given the lack of access to the platforms’ data. During my research, I was able to measure the effects of vaccine-related disinformation circulating on Twitter during the vaccination campaign in the United States. Specifically, we showed that the counties in which the most disinformation was consumed were linked to certain “clusters” of unvaccinated population that could have jeopardised the eradication of the virus».
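As a rough illustration of the kind of county-level analysis mentioned here, the sketch below correlates a hypothetical share of low-credibility vaccine content with a hypothetical vaccination rate. The figures are invented and the method shown is a simple rank correlation, not the study’s actual methodology; it assumes pandas and SciPy are available.

```python
# Illustrative county-level correlation between exposure to low-credibility
# vaccine content and vaccine uptake. All numbers below are made up.
import pandas as pd
from scipy.stats import spearmanr

counties = pd.DataFrame({
    "county": ["A", "B", "C", "D", "E"],
    "misinfo_share": [0.02, 0.05, 0.08, 0.12, 0.15],    # fraction of vaccine posts from low-credibility sources
    "vaccination_rate": [0.71, 0.66, 0.60, 0.54, 0.49], # fraction of population fully vaccinated
})

rho, p_value = spearmanr(counties["misinfo_share"], counties["vaccination_rate"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```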

What are the characteristics of fake news?

«Although there are studies showing that “fake news spreads more quickly than real news”, I believe the term “fake news” has been misused by many to refer to a huge variety of unreliable and potentially harmful content. I think we must consider a spectrum of falsehood and unreliability, and so, rather than looking for characteristics common to all fake news, we should try to understand why some content spreads in a certain way and has a certain impact».

What could individual platforms or countries do to mitigate the problem without falling into censorship?

«As the professionals say, there is still no silver bullet, no definitive solution to this problem. Personally, I believe that the platforms are doing, and have done, too little on this matter, and that governments have not sufficiently highlighted the severity of the problem. At the same time, I understand the difficulty of applying a strategy that does not violate freedom of speech in any way. All told, I feel that we can agree to more interventions from above (by the platforms) when it is a matter of objective and blatant incitement to violence and other negative consequences in the real world (as in the case of the Capitol Hill attack in the USA)».

Editorial teams in the media employ journalists dedicated to so-called fact-checking, that is, verifying the reliability of certain viral news. Is this approach delivering?

«Scientific papers show conflicting results regarding the efficacy of fact-checking. I think it is important to have these tools, but I do not believe that they can make a serious difference or a notable contribution to the fight against disinformation».

How can we defend ourselves as users? What can we do to avoid falling into the trap and amplifying misleading information ourselves?

«It is clearly important to learn to consume online content in a healthy and intelligent way, but at the same time I think it would be a mistake to entrust this responsibility to individuals. We are inundated with billions of articles, photos and videos every day; I believe that the heavy lifting should be done by the platforms, which all too often show that they are not doing enough to protect their users».
