Twitter is experimenting with automatically limiting the reach of unwanted tweets


Twitter is ramping up its fight against trolls: users who harass others. The company is currently testing an algorithm that recognizes offensive tweets and automatically limits their reach.

Twitter announced this on its blog on Tuesday. The company hopes the algorithm will help users feel safe on the social network. Twitter believes that everyone should be able to express themselves freely without being immediately harassed by offensive tweets.

The algorithm Twitter is currently testing takes into account, among other things, the context of a message. To do this, the program looks for matches with messages that have been flagged as offensive in the past. It also considers how long the account has existed. The software does not take into account whether a tweet is merely unpopular, the American company claims.
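A minimal sketch of how such signals might be combined, assuming a word-overlap measure against previously flagged tweets and a simple account-age check. Every name, phrase, and weight here is an illustrative assumption, not Twitter's actual system:

```python
from datetime import datetime, timezone

# Toy stand-ins for tweets flagged as offensive in the past.
FLAGGED_PHRASES = {"you are worthless", "nobody wants you here"}

def jaccard_similarity(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two strings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def abuse_score(tweet_text: str, account_created: datetime) -> float:
    """Combine a context signal and an account-age signal into one score."""
    # Context signal: closest match against previously flagged messages.
    similarity = max(
        (jaccard_similarity(tweet_text, phrase) for phrase in FLAGGED_PHRASES),
        default=0.0,
    )
    # Account-age signal: very new accounts are weighted as riskier.
    age_days = (datetime.now(timezone.utc) - account_created).days
    newness = 1.0 if age_days < 7 else 0.0
    # Weights are arbitrary for illustration.
    return 0.8 * similarity + 0.2 * newness
```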

The author of an insulting tweet will not notice that the algorithm intervenes, but under the hood the program limits the reach of the message. In this way, Twitter hopes to limit the potential damage an offensive tweet can inflict.
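The intervention itself could look something like the sketch below: the tweet is never deleted and stays visible to its author, but stops being shown to others once its score crosses a threshold. The threshold and field names are assumptions for illustration, reusing the hypothetical score from the sketch above:

```python
SCORE_THRESHOLD = 0.6  # illustrative cut-off, not a real Twitter value

def tweet_is_visible(tweet: dict, viewer_id: str) -> bool:
    """Decide whether a tweet shows up for a given viewer.

    Silent reach limiting: above the threshold the tweet stops appearing
    for everyone except its author, who notices nothing.
    """
    if viewer_id == tweet["author_id"]:
        return True  # the author keeps seeing their own tweet
    return tweet["abuse_score"] < SCORE_THRESHOLD

tweet = {"author_id": "u1", "abuse_score": 0.7}
print(tweet_is_visible(tweet, "u1"))  # True: author still sees it
print(tweet_is_visible(tweet, "u2"))  # False: hidden from others
```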

Twitter is also adjusting its so-called violent threats policy. In concrete terms, this means the company treats not only ‘direct, specific threats’ as undesirable, but also tweets that promote violence against others. According to Twitter, the change will help the company fight trolls.

Twitter will also be tougher on trolls in its enforcement. A special enforcement team takes action when users request that offensive content be removed, for example by temporarily blocking access to an account. The block is only lifted once the user provides a valid telephone number and thereby identifies themselves.

Twitter has been trying to crack down on abuse on its service for some time. Twitter CEO Dick Costolo previously said in internal documents that the company “sucks” at dealing with abuse. The company had already added the option for bystanders to report abuse, where previously only the victim could do so.

Researchers are also working on containing provocative, insulting or threatening comments. A team at Stanford University used machine learning to develop a tool that can automatically recognize antisocial behavior by internet users.
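As a rough illustration of that approach, a minimal text classifier can be trained on labelled comments. The toy data below stands in for the large moderator-labelled corpora such research actually uses, and scikit-learn is just one convenient choice, not necessarily what the Stanford team used:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: 0 = ordinary comment, 1 = antisocial.
comments = [
    "Thanks for the thoughtful reply, that helped a lot",
    "Interesting point, I had not considered that",
    "You are an idiot and everyone here hates you",
    "Get lost, nobody asked for your worthless opinion",
]
labels = [0, 0, 1, 1]

# TF-IDF features over unigrams and bigrams, then logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

print(model.predict(["What a helpful suggestion"]))   # expect [0]
print(model.predict(["Nobody asked you, get lost"]))  # expect [1]
```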
