Facebook’s algorithm detects only a small fraction of hate speech posts


Facebook’s algorithm detects only a small portion of posts that violate its hate speech rules, according to internal documents. In a response, the company says it focuses mainly on how often users encounter such posts.

Facebook itself estimates that it detects 3 to 5 percent of hate speech posts, The Wall Street Journal reports; for posts that incite violence, the figure is 0.6 percent. These estimates cover posts that the algorithm found on its own, without a user report, and that were removed for that reason. At the same time, Facebook claims that in 98 percent of cases the algorithm detects posts with problematic content before any user reports them. The two figures are not necessarily contradictory, because they use different baselines: the 3 to 5 percent is measured against all violating posts on the platform, while the 98 percent is measured only against the posts that Facebook actually removed.
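To make that distinction concrete, here is a minimal sketch in Python. The counts are invented for illustration and do not come from Facebook or the Journal; they only show how a low overall detection rate and a high proactive rate can hold at the same time.

```python
# Hypothetical counts, invented purely for illustration.
total_violating_posts = 100_000  # all posts that violate the hate speech rules
removed_posts = 4_000            # violating posts that were actually removed
removed_proactively = 3_920      # removals made before any user report

# Detection rate against ALL violating content (the leaked 3-5% style figure).
detection_rate = removed_proactively / total_violating_posts

# Proactive rate against REMOVED content only (the public 98% style figure).
proactive_rate = removed_proactively / removed_posts

print(f"share of all violations caught: {detection_rate:.1%}")       # 3.9%
print(f"share of removals found proactively: {proactive_rate:.1%}")  # 98.0%
```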

In a response, Facebook says it disputes the picture that emerges from the documents, namely that the algorithm performs worse than the company publicly claims. Instead, the company prefers to focus on the share of hate-mongering content that users actually see. That share has fallen to 5 in every 10,000 content views, roughly half of what it was nine months ago. This is partly because the algorithm, when it is not confident enough to remove a post outright, limits the post’s distribution instead, Facebook says.

The algorithm also has trouble recognizing content correctly. In one example, footage of cockfights was labeled as a car crash, while video of a car going through a car wash was labeled by the algorithm as footage of a shooting.

Facebook arrives at these numbers by taking a sample of all posts on the platform and having it reviewed by human moderators, which yields an estimate of how much the algorithm has missed. The Wall Street Journal obtained the internal documents from a whistleblower.
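As a rough illustration of how such a sample-based estimate can work, the sketch below simulates the idea in Python. It is a generic sampling approach, not Facebook’s actual methodology, and every rate and count in it is invented:

```python
import random

random.seed(0)

# Hypothetical universe of posts; about 1% violate the rules (invented rate).
posts = [{"violating": random.random() < 0.01, "caught": False}
         for _ in range(1_000_000)]

# Pretend the algorithm catches only ~4% of violating posts (also invented).
for post in posts:
    if post["violating"] and random.random() < 0.04:
        post["caught"] = True

# Human reviewers check a random sample and count the violations in it.
sample = random.sample(posts, 10_000)
violating = [p for p in sample if p["violating"]]
missed = [p for p in violating if not p["caught"]]

# Extrapolating from the sample gives an estimate of the algorithm's coverage.
if violating:
    estimated_coverage = 1 - len(missed) / len(violating)
    print(f"violations in sample: {len(violating)}")
    print(f"estimated share the algorithm catches: {estimated_coverage:.1%}")
```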
