Twitter moves against ‘troll’ behaviour

Social media giant to change how problem tweets appear in searches and conversations

Twitter’s move against trolling is part of the company’s attempts to improve what it describes as the health of public conversation on its platform.

Twitter is taking steps to deal with “troll” behaviours that “distort and detract” from the public conversation on the platform, changing how such tweets appear in public searches and conversations.

The move is a global one, and is designed to deal with behaviour that doesn’t breach Twitter’s usage policies but doesn’t contribute to the conversation either. The content itself will remain on Twitter but will only be shown if people click on “show more replies” or choose to see everything in their searches.

The company said it would look at a range of “signals” – such as people signing up for multiple accounts simultaneously, accounts that have not confirmed their email address, accounts that repeatedly tweet and mention accounts that don’t follow them, and behaviour that might be a co-ordinated attack. Twitter said it would also look at how these accounts interacted with and were linked to users who violate Twitter’s rules.

Policies

The move was announced in a blog post by Twitter’s vice-president of trust and safety Del Harvey and director of product management David Gasca.


“Some of these accounts and tweets violate our policies, and, in those cases, we take action on them. Others don’t but are behaving in ways that distort the conversation,” the company said.

According to Twitter, reported accounts make up less than 1 per cent of its total user base, which currently stands at 336 million monthly active users. It said much of what was reported didn’t violate its rules; however, that didn’t mean it wasn’t affecting the conversation on the platform.

The move is part of Twitter’s attempts to improve what it describes as the health of public conversation on its platform. The firm has come under fire in recent years for how it handles reports of abuse by other users and has made a series of changes aimed at addressing that. However, some believe it has not gone far enough and still has work to do.

The company currently uses a mixture of machine learning, human review processes and policies to determine how tweets are organised in conversations and search.

“By using new tools to address this conduct from a behavioural perspective, we’re able to improve the health of the conversation, and everyone’s experience on Twitter, without waiting for people who use Twitter to report potential issues to us,” the post said.

Methods

The company has been testing the new methods in markets around the world, and claimed it had seen abuse reports fall as a result, with a 4 per cent drop in reports from searches and an 8 per cent decline in reports from conversations.

“That means fewer people are seeing Tweets that disrupt their experience on Twitter,” the post said.

However, the company acknowledged there was more work to do, and warned there would be “false positives” and missed incidents as the technology bedded in.

“This technology and our team will learn over time and will make mistakes,” the post said. “Our goal is to learn fast and make our processes and tools smarter. We’ll continue to be open and honest about the mistakes we make and the progress we are making.”

Ciara O'Brien


Ciara O'Brien is an Irish Times business and technology journalist