Twitter rolls out new abuse controls, including policy changes and technology to detect trolls


Twitter has rolled out huge new changes to stop abuse and harassment, the latest update in its attempts to fix the network’s admitted problems with trolling.
The changes include policy updates and the roll-out of new technology, both of which are aimed at making users less likely to come into contact with abuse, and less able to troll other users.
The site has updated the wording of its policies to do with abuse. Where it once only prohibited “direct, specific threats of violence against others”, it now bans “threats of violence against others or promot[ing] violence against others”.
The other policy update allows for different kinds of blocks on users. Where once they could only be banned entirely and have problem content removed, Twitter can now add more specific blocks.
Users can be banned for just a short period of time, for instance. When they return to Twitter, they may be asked to complete certain checks, such as confirming that they will abide by Twitter’s rules.
[Image: How the temporary account lock feature will work]

The site is also introducing new algorithms for automatically spotting abuse and counteracting it before it is shown. While content will still only be deleted by human members of staff, the site will be able to automatically detect abusive messages and keep them out of users’ mentions, meaning that users won’t see them unless they choose to.
“This feature takes into account a wide range of signals and context that frequently correlates with abuse including the age of the account itself, and the similarity of a Tweet to other content that our safety team has in the past independently determined to be abusive,” wrote director of product management Shreyas Doshi in a blogpost announcing the changes.

“It will not affect your ability to see content that you’ve explicitly sought out, such as Tweets from accounts you follow, but instead is designed to help us limit the potential harm of abusive content.”
The feature is similar to an existing one, quality filtering, but that is only turned on for verified users. It is also thought that the algorithm for the new feature is less aggressive than that one, which can also be manually turned on and off.
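The signal-based approach Doshi describes — weighing account age and similarity to previously confirmed abuse before deciding whether to hide a tweet from someone’s mentions — could be sketched, purely as illustration, like this. All weights, thresholds, and names here (`account_age_days`, `similarity_to_known_abuse`) are hypothetical, not Twitter’s actual model:

```python
def should_limit_visibility(account_age_days: float,
                            similarity_to_known_abuse: float) -> bool:
    """Toy sketch of a signal-based abuse filter.

    Newer accounts and tweets resembling content previously judged
    abusive score higher; above a threshold, the tweet is hidden from
    the recipient's mentions rather than deleted (deletion remains a
    human decision in Twitter's description).
    """
    # Hypothetical decay: a brand-new account scores near 1.0,
    # a month-old account near 0.5, an old account near 0.0.
    age_signal = 1.0 / (1.0 + account_age_days / 30.0)

    # Hypothetical weighting of the two signals into one score.
    score = 0.4 * age_signal + 0.6 * similarity_to_known_abuse
    return score > 0.5
```

A real system would combine many more signals than two, but the shape — score the context, filter above a threshold, leave deletion to humans — matches what the blogpost describes.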
The changes come on the same day that Twitter turned on the ability to receive direct messages from any user. Some have already said that feature could have implications for abusive behaviour, allowing users to contact people in private and making it harder for harassment to be seen by others.
It received support from Twitter users, many of whom have been critical of Twitter's work to combat the use of its network for abuse.