Software Flaw Exposed Facebook Moderators’ Info to Suspected Terrorists

“The punishment from ISIS for working in counter-terrorism is beheading. All they’d need to do is tell someone who is radical here.”

Mikael Thalen
June 16, 2017

A flaw in software used by Facebook to moderate offensive content exposed more than 1,000 employees to suspected terrorists online.

According to a Friday report from The Guardian’s Olivia Solon, the social network discovered in late 2016 that the personal profiles of moderators in 22 departments had become visible to suspected extremists.

The profiles of the employees, who were tasked with removing terrorist propaganda and other banned content, began “automatically appearing as notifications in the activity log” of Facebook groups whose administrators were flagged and removed. The remaining members of those groups were then able to view the moderators’ personal details.

Roughly 40 of the 1,000 exposed employees worked in Dublin, Ireland, at Facebook’s counter-terrorism unit. Of those 40, six were labeled “high priority” after the social network determined “their personal profiles were likely viewed by potential terrorists.”

Speaking with The Guardian, one of the six employees, an Iraqi-born Irish citizen who asked to remain anonymous, stated that seven people linked to an Egyptian terrorist group sympathetic to Hamas and ISIS had seen his profile.

The moderator, who worked for Facebook as a contractor through Cpl Recruitment, fled the country shortly afterward, fearing retaliation.


“The only reason we’re in Ireland was to escape terrorism and threats,” he said, explaining that numerous members of his family had been beaten and executed in Iraq.

Although Facebook initially “offered to install a home alarm monitoring system and provide transport to and from work” to high-priority moderators, the Iraqi-born man felt he had become…
