
M. Mazza, G. Cola, "The role of Twitter fraudulent accounts during the COVID-19 pandemic: a case study." The 1st International Workshop on Information Disorder: Disinformation and Coordinated Inauthentic Behavior (DISINFO’20), 2020

The use of social media, such as Twitter or Facebook, as a means to gather information on disparate subjects has increased dramatically over the last few years. A growing number of users rely on such platforms even to retrieve emergency updates, such as those regarding the Covid-19 pandemic. This use of social platforms has raised significant concern, as malicious bots (i.e., clusters of automated and coordinated accounts) have been shown to be extensively exploited to manipulate information and deceive users. In this context, a key role is played by accounts sold in underground markets, which are commonly used to purchase ready-to-use accounts in bulk for malicious purposes.

This paper presents a case study on the use of fraudulent Twitter accounts, sold by account merchants, to spread disinformation related to the Covid-19 pandemic. We monitored a popular Twitter account market from June 2019 to the end of April 2020 and detected around 25,000 fake accounts for sale. The merchant we studied shows potential buyers a list of sample accounts, which is updated daily. This list includes full screen names, which can be used to retrieve the corresponding user IDs; unlike a screen name, a Twitter user ID cannot be changed over time. Throughout the study, we passively monitored the merchant's website for new samples in order to collect the respective user IDs. To the best of our knowledge, this is the first work that passively monitored fake Twitter accounts sold by merchants, without interfering in any way with their activity. Right after detection, each account was monitored to keep track of all changes in its profile information as well as all its activity on Twitter. Detection and monitoring have been conducted in parallel since July 2019.
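The monitoring described above amounts to periodically snapshotting each sampled account and diffing consecutive snapshots, keyed by the immutable user ID. The paper does not publish its monitoring code; the following is a minimal sketch of that idea in Python, with hypothetical names (`AccountMonitor`, `observe`, and the profile fields) and no real Twitter API calls.

```python
# Sketch of passive account monitoring by snapshot diffing.
# User IDs are immutable, so they serve as stable keys even when the
# screen name or other profile fields change. All names are illustrative.

def diff_profiles(old: dict, new: dict) -> dict:
    """Return {field: (old_value, new_value)} for every changed field."""
    changed = {}
    for field in old.keys() | new.keys():
        if old.get(field) != new.get(field):
            changed[field] = (old.get(field), new.get(field))
    return changed

class AccountMonitor:
    def __init__(self):
        self.latest = {}   # user_id -> most recent profile snapshot
        self.history = {}  # user_id -> list of (snapshot_index, changes)
        self.snapshots = 0

    def observe(self, user_id: int, profile: dict) -> dict:
        """Record a new snapshot; return the changes since the last one."""
        changes = {}
        if user_id in self.latest:
            changes = diff_profiles(self.latest[user_id], profile)
            if changes:
                self.history.setdefault(user_id, []).append(
                    (self.snapshots, changes)
                )
        self.latest[user_id] = profile
        self.snapshots += 1
        return changes
```

For instance, a sold account that is renamed by its buyer would surface as a `screen_name` change recorded against the unchanged user ID.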
However, as this study focuses on Covid-related disinformation, we applied a filter to select only the accounts that tweeted or retweeted Covid-related content between January 1 and April 30, 2020. Among the monitored fake accounts, we identified 859 that tweeted or retweeted Covid-related keywords, regardless of the language used. Since previous research has confirmed the importance of common retweeting activity for identifying coordinated behavior, we hypothesize that a co-retweet graph, in which two accounts are linked when they retweet the same tweets, can help identify clusters of coordinated accounts. We used the state-of-the-art Louvain algorithm to identify communities of accounts showing a similar retweeting behavior in the co-retweet graph. This resulted in the detection of 26 communities of accounts that are strongly linked by similar retweeting patterns.
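The pipeline above can be sketched in a few lines: accounts become nodes, edge weights count shared retweets, and Louvain partitions the result. This is not the authors' code; it is a minimal illustration using networkx's `louvain_communities`, with a toy data shape (account name mapped to the set of tweet IDs it retweeted) assumed for clarity.

```python
# Sketch of the co-retweet pipeline: accounts are nodes, and an edge's
# weight counts how many distinct tweets two accounts both retweeted.
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import louvain_communities

def build_coretweet_graph(retweets: dict[str, set[str]]) -> nx.Graph:
    """Link two accounts with weight = number of shared retweeted tweets."""
    g = nx.Graph()
    g.add_nodes_from(retweets)
    for a, b in combinations(retweets, 2):
        shared = len(retweets[a] & retweets[b])
        if shared > 0:
            g.add_edge(a, b, weight=shared)
    return g

# Toy example: two clusters of coordinated retweeters (illustrative data).
retweets = {
    "acct1": {"t1", "t2", "t3"},
    "acct2": {"t1", "t2", "t3"},
    "acct3": {"t2", "t3"},
    "acct4": {"t8", "t9"},
    "acct5": {"t8", "t9"},
}
g = build_coretweet_graph(retweets)
communities = louvain_communities(g, weight="weight", seed=0)
```

On real data, each detected community groups accounts whose retweet sets overlap heavily, which is the coordination signal the study exploits.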