Twitter to begin asking users to review ‘offensive’ tweets before sharing


Will prompts make people play nicer?

In an effort to become a kinder social network, Twitter is testing a feature that asks users to reconsider what they share before hitting the send button. 

In 2020, the notoriously toxic network tested prompts encouraging users to pause and reconsider potentially mean or offensive replies before posting them. Based on feedback from the trial, Twitter determined the effort to be effective. On Wednesday, the company announced it would begin rolling out the prompts across iOS and Android, beginning with English-language accounts. 

The prompts “resulted in people sending less potentially offensive replies across the service, and improved behavior on Twitter,” a product manager and a designer wrote in a post on the company’s blog. After being prompted, 34% of people revised or deleted their initial reply and, on average, composed 11% fewer offensive replies going forward. The prompts also reduced the amount of “harmful or offensive” responses people received, the company found. 

The prompts being rolled out this week are significantly improved over those early tests, according to the blog post. “In early tests, people were sometimes prompted unnecessarily because the algorithms powering the prompts struggled to capture the nuance in many conversations and often didn’t differentiate between potentially offensive language, sarcasm, and friendly banter,” the authors wrote.

By encouraging people to hesitate before posting, Twitter hopes to prevent users from hastily sending regrettable content while overcome by emotion. “We’re trying to encourage people to rethink their behavior and rethink their language before posting because they often are in the heat of the moment and they might say something they regret,” Sunita Saligram, Twitter’s global head of site policy for trust and safety, told Reuters. 

The platform has other policies in place to police less nuanced, more aggressive posts: Twitter prohibits users from targeting others with racist or sexist tropes, degrading content, or slurs. Enforcement is an uphill battle; between January and June of 2020 alone, action was taken against more than 584,000 accounts for violating hateful conduct policies, Reuters reported.
