Model aims to combat misinformation by tracking trolls in real time

Researchers from Texas Tech University have submitted a new paper to arXiv outlining a predictive model that identifies Russian troll accounts on X (formerly Twitter). The researchers, Sachith Dassanayaka, Ori Swed, and Dimitri Volchenkov, say their machine learning model detects Russian troll accounts with 88% accuracy and that the technique can be applied to other social media networks.

To check that their model works beyond a single dataset, the researchers tested it on two separate datasets of Russian troll posts, where it achieved 90.7% accuracy on the first and 90.5% on the second. The authors believe social networks could use the model to track trolls in real time, which could help reduce the spread of misinformation around elections.
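As a rough illustration of what such real-time screening might look like, here is a minimal Python sketch that scores each newly seen account with a pre-trained binary classifier and flags likely trolls for review. The 0.5 threshold, the function names, and the scikit-learn-style `predict_proba` interface are assumptions for demonstration, not the researchers' actual pipeline; the feature vectors used here are described in the next paragraph.

```python
# Hypothetical real-time screening loop. The threshold and the
# (handle, feature_vector) input format are illustrative assumptions,
# not the paper's pipeline.
def screen_accounts(model, accounts, threshold=0.5):
    flagged = []
    for handle, feature_vector in accounts:
        # A scikit-learn-style binary classifier returns
        # [P(genuine), P(troll)] from predict_proba.
        troll_probability = model.predict_proba([feature_vector])[0][1]
        if troll_probability >= threshold:
            flagged.append((handle, troll_probability))
    return flagged
```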

During the study, the researchers found that eight features were the most useful for weeding out trolls: post count, repost count, follower count, following count, replies count, likes count, users mentioned, and hashtag count. The paper suggests that follower count, hashtag count, and tweet count are among the most effective signals for identifying a troll.
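To make the feature set concrete, here is a minimal sketch of how an account's metadata could be turned into that eight-feature vector and used to train a detector. The dictionary field names and the choice of a random forest are illustrative assumptions; the paper's exact model and preprocessing are not detailed here.

```python
from sklearn.ensemble import RandomForestClassifier

# The eight account-level features the researchers found most discriminative.
FEATURES = [
    "post_count", "repost_count", "follower_count", "following_count",
    "replies_count", "likes_count", "users_mentioned", "hashtag_count",
]

def to_vector(account: dict) -> list:
    # Turn raw account metadata into an ordered feature vector.
    # The dict keys are hypothetical field names, not the paper's schema.
    return [account.get(name, 0) for name in FEATURES]

def train_detector(labeled_accounts):
    # labeled_accounts: list of (account_dict, label) pairs, where label is
    # 1 for a known troll and 0 for a genuine account. A random forest
    # stands in here for the paper's unspecified machine learning model.
    X = [to_vector(account) for account, _ in labeled_accounts]
    y = [label for _, label in labeled_accounts]
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)
    return model
```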

Not only is the researchers' model able to detect trolls, it can also sort them into one of four categories: fake news, organizations, political affiliates, and individuals. Each is described below, followed by a rough sketch of the categorization step.

  • The fake news group involves accounts that pose as legitimate news outlets, building credibility so that readers will trust the disinformation they spread.
  • Organization accounts often act as though they're NGOs, charities, or community groups. They aim to build trust and influence conversations around political and social issues, pushing particular agendas while appearing to be grassroots movements.
  • Political affiliates are open about their political opinions. The researchers say they're not inherently malicious, but these accounts can be problematic if they promote extreme views, spread propaganda, or manipulate political discourse. They have the power to sow discord and influence political conversations.
  • Finally, there are individual accounts. These are the most common trolls; they portray themselves as normal users with genuine profiles and interests, engaging in conversations, sharing content, and building networks while pushing their agenda. The researchers said these accounts are challenging to detect, but doing so is important because they create the illusion of widespread support for certain issues.
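As a rough illustration of the categorization step mentioned above, the same kind of feature-based classifier can be trained with four labels instead of two. The label names and the random forest below are assumptions for demonstration; the paper's actual categorization method is not spelled out here.

```python
from sklearn.ensemble import RandomForestClassifier

# The four troll categories the model assigns (names are illustrative).
CATEGORIES = ["fake_news", "organization", "political_affiliate", "individual"]

def train_categorizer(troll_accounts):
    # troll_accounts: list of (feature_vector, category_name) pairs for
    # accounts already identified as trolls. A random forest again stands
    # in for the paper's unspecified model.
    X = [vector for vector, _ in troll_accounts]
    y = [CATEGORIES.index(category) for _, category in troll_accounts]
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)
    return model

def categorize(model, feature_vector):
    # Map the predicted class index back to its category name.
    return CATEGORIES[model.predict([feature_vector])[0]]
```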

The researchers say their model can be used to detect other influence networks, not just Russian ones. They believe this research could improve existing detection tools, making the social media experience better for ordinary users and reducing instances of fake news.

Source: arXiv
