Twitter has long struggled to foster discourse between opposing viewpoints, particularly when it comes to politics.
Some measures the platform has taken, such as the removal of 70 million bot-run accounts and the reinforcement of its user verification process, seem like legitimate efforts in the right direction. Twitter's recent acquisition of Smyte, a company that develops software to mitigate spam and abuse on the internet, likewise suggests an active effort towards a friendlier overall platform.
In the broader scheme of things, however, these steps do come across as stopgap measures. Bearing this in mind, Twitter now aims to dig deeper into the causes of such divides and the real-life repercussions of these online echo chambers on its users.
Back in March, in a series of tweets, Twitter CEO Jack Dorsey announced the beginnings of a plan to make Twitter a safer space for discourse and clashing ideologies. The plan would involve working with two independent firms - Cortico and Social Machines - to evaluate the quality of discourse on the platform, in addition to taking suggestions directly from its users.
He also acknowledged some of the platform's failings in dealing with rampant racism and toxic rhetoric within its userbase, conceding that simply removing such content without "building a framework to help encourage more healthy debates, conversations and critical thinking" would ultimately change nothing.
Twitter is now taking an additional step forward, and has announced a partnership with Dr. Rebekah Tromble of Leiden University and her team of researchers. The team aims to investigate how these echo chambers form around points of view, especially in political discussion, by observing how often a user engages with opposing viewpoints. Its past findings have indicated that echo chambers often lead to increased hostility and resentment towards groups not part of similar conversations.
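Neither Twitter nor the researchers have published a concrete formula, but the kind of measurement described, counting how often a user engages across ideological lines, can be sketched in a few lines of code. Everything below (the function name, the leaning labels, the sample data) is hypothetical illustration, not the team's actual method:

```python
def cross_cutting_ratio(user_leaning, engaged_leanings):
    """Toy echo-chamber signal: the fraction of a user's engagements
    (replies, quotes, likes) involving an account whose political
    leaning differs from the user's own. A ratio near 0 suggests an
    insular bubble; a ratio near 1 suggests broad exposure."""
    if not engaged_leanings:
        return 0.0
    opposing = sum(1 for leaning in engaged_leanings
                   if leaning != user_leaning)
    return opposing / len(engaged_leanings)

# Hypothetical sample: leanings of the accounts one user interacted with.
engaged_with = ["left", "left", "left", "right", "left", "centre"]
print(cross_cutting_ratio("left", engaged_with))  # ~0.33: mostly like-minded
```

A real study would infer leanings from network structure or content models rather than hand-labelled tags, but the underlying quantity, engagement with the other side as a share of all engagement, would be the same.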
Moreover, the 'civility' (or lack thereof) of such discourse would be measured by how aggressive interactions are, judged against social norms of politeness and courtesy. Racism, xenophobia and other forms of "intolerant discourse" will be treated as "threatening to democracy", and will be taken into account in order to develop algorithms that 'measure' the healthiness of such exchanges on Twitter (a toy sketch of what such a score might look like follows the quote below). According to Dr. Tromble:
"In the context of growing political polarization, the spread of misinformation, and increases in incivility and intolerance, it is clear that if we are going to effectively evaluate and address some of the most difficult challenges arising on social media, academic researchers and tech companies will need to work together much more closely."
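The scoring models themselves haven't been disclosed, so the following is purely an illustrative sketch: a 'conversation health' score along the lines described might blend a per-exchange civility signal with the exposure ratio sketched earlier. The keyword list, weights, and function names here are all placeholder assumptions:

```python
import re

# Placeholder lexicon; a production system would rely on trained
# classifiers, not keyword matching.
HOSTILE_TERMS = {"idiot", "traitor", "scum"}

def civility_score(messages):
    """Toy civility signal: the share of messages in an exchange that
    contain no hostile terms (1.0 = fully civil, 0.0 = uniformly hostile)."""
    if not messages:
        return 1.0
    civil = sum(1 for m in messages
                if not (HOSTILE_TERMS & set(re.findall(r"[a-z]+", m.lower()))))
    return civil / len(messages)

def conversation_health(messages, exposure_ratio, w_civility=0.7):
    """Hypothetical composite: a weighted blend of civility and the
    cross-cutting exposure ratio from the earlier sketch."""
    return (w_civility * civility_score(messages)
            + (1 - w_civility) * exposure_ratio)

thread = ["I disagree, and here is why...", "You absolute idiot."]
print(conversation_health(thread, exposure_ratio=0.33))  # ~0.45
```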
In addition to the aforementioned team of scholars, Twitter will also be partnering with Professor Miles Hewstone and John Gallacher from the University of Oxford, and Dr. Marc Heerdink of the University of Amsterdam. This team will work in parallel, applying real-life social psychology and Professor Hewstone's extensive experience in the field of intergroup conflict to grasp the complexity of the issue at hand and to integrate it into Dorsey's proposed "framework" for the platform. Per Professor Hewstone:
"Evidence from social psychology has shown how communication between people from different backgrounds is one of the best ways to decrease prejudice and discrimination. We’re aiming to investigate how this understanding can be used to measure the health of conversations on Twitter, and whether the effects of positive online interaction carry across to the offline world.”
The nature of conflict over the internet and its impact on real-life rhetoric is an extremely complex subject, and the toxicity that accompanies it, unfortunately, comes with no real catch-all solution. Moreover, the entire initiative outlined above is still in its infancy - there's no word yet on how these algorithms and studies will be applied to the social network, let alone any concrete assurance that these measures will prove effective rather than, worse still, detrimental to the cause.
Cynicism at this early stage is certainly counter-productive, however; the outcome of this study should still provide great insight into the various ways in which users shut themselves off when presented with differing views and ideas. Even if it doesn't yield a conclusive answer to our collective social media woes, it may go a fair way towards painting a clearer picture of what the question was in the first place.
Source: Twitter Blog via Engadget