Facebook bug may have led to the spread of ‘harmful content’
For six months, a bug at Facebook caused misleading and ‘harmful’ posts to be shown to users when they were supposed to remain hidden. That is according to internal company documents viewed by The Verge.
A bug in Facebook’s algorithm disrupted the ranking of posts, The Verge writes. As a result, posts judged ‘misleading’ by external fact-checkers were still shown to users of the platform. Posts containing violence and nudity and news articles from Russian state media also appeared in users’ feeds. The visibility of misleading posts is said to have increased by 30 percent worldwide due to the software bug.
The increased spread started in October last year and lasted until March 11, 2022. The bug thus impacted Facebook’s timelines for roughly six months, as the company’s engineers couldn’t find the cause. According to The Verge, the issue was internally labeled a “massive ranking failure.” The company reportedly classified the bug as a level 1 ‘site event’. According to the outlet, that label is reserved for high-priority ‘technical crises’.
Internal documents state that the bug was introduced into the algorithm in 2019 but didn’t have a noticeable impact until October. The Verge emphasizes in its article that there is no indication of malicious intent behind the ranking bug, and also writes that Facebook’s other moderation tools were not affected by the problem.
A spokesperson for Facebook parent company Meta confirmed the incident in a statement to The Verge. In it, the company says it “detected inconsistencies in downranking.” There were reportedly five separate occasions on which this caused “small, temporary increases” in the number of views of problematic content. “We traced the root cause to a software bug and applied needed fixes,” the spokesperson said. Meta said the bug “has not had any meaningful, long-term impact” on Facebook’s metrics. The software flaw also did not affect Facebook’s ability to remove content that explicitly violated its rules.