
Facebook may have exposed the identities of more than 1,000 of its content moderators across 22 departments to suspected terrorist users of the platform. According to a report in the Guardian, a software bug discovered in November last year caused the profiles of content moderators to appear automatically as notifications in Facebook groups. These moderators remove pages containing inappropriate content such as terrorist propaganda, hate speech, and sexual material.

Facebook confirmed the security breach in a statement to the Guardian and said the company has fixed the issue. Further, according to the report, Facebook has made technical changes to “better detect and prevent these types of issues from occurring”.

“Of the 1,000 affected workers, around 40 worked in a counter-terrorism unit based at Facebook’s European headquarters in Dublin, Ireland. Six of those were assessed to be ‘high priority’ victims of the mistake after Facebook concluded their personal profiles were likely viewed by potential terrorists,” reads the report.

The Guardian quoted one of the victims, an Iraqi-born Irish citizen, who told the site that he had gone into hiding after discovering that members of suspected terrorist groups (whom he had banned from Facebook) had viewed his profile.

Meanwhile, Facebook recently announced it has started using artificial intelligence (AI) to stop the spread of terrorist content on the platform. The social media giant is deploying techniques such as image matching, language understanding, and cross-platform collaboration to help combat terrorists’ use of its service.

Additionally, human expertise, which includes user reports of terrorism-related accounts as well as an expanded team of terrorism and safety specialists, will help Facebook understand context.
