This team of specialists has "significantly grown" over the past year, according to a Facebook blog post Thursday detailing the company's efforts to crack down on terrorists and their posts. The first post addresses how the company responds to the spread of terrorism online. "There are too many examples of social media companies being made aware of illegal material yet failing to remove it, or to do so in a timely way", the report stated.
Bickert and Fishman said that when Facebook receives reports of potential "terrorism posts", it reviews those reports urgently.
In years past, Facebook and other social media companies relied heavily on the manual effort of human moderators to identify and potentially block or delete offensive content.
The new AI system at Facebook will use natural language identification technology to pick up on key words, phrases and the general activities of those who may be using the social network for illicit purposes.
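Facebook has not published how its matching works; as a minimal sketch, keyword- and phrase-based flagging of the kind described above might look like this (the watch list and function name are illustrative assumptions, not Facebook's actual signals):

```python
# Hypothetical watch list -- Facebook has not disclosed its real signals.
WATCH_PHRASES = ["join our fight", "martyrdom operation"]

def flag_post(text: str) -> bool:
    """Return True if the post contains any watched phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in WATCH_PHRASES)
```

In practice such a matcher would be only a first-pass filter, with flagged posts routed to the human review teams the article describes.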
"We know we can do better at using technology - and specifically artificial intelligence - to stop the spread of terrorist content on Facebook", Monika Bickert, Facebook's director of global policy management, and Brian Fishman, the company's counterterrorism policy manager, said in the post.
They revealed that the company has a team of more than 150 counterterrorism specialists, including academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, all of whom work exclusively or primarily on countering terrorism.
But they remain under intense scrutiny, and G7 leaders last month issued a joint call for internet providers and social media firms to step up the fight against extremist content online. "Part of this is telling our community, 'This is our commitment and this is how we are living up to it'".
Meanwhile, because AI can't catch everything and sometimes makes mistakes, Facebook is also beefing up its manpower: it previously announced it would hire an extra 3,000 staff to track and remove violent video content.
"We're now experimenting with analyzing text that we've already removed for praising or supporting terrorist organizations such as ISIS and Al Qaeda so we can develop text-based signals that such content may be terrorist propaganda".
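One crude way to derive "text-based signals" from already-removed posts, as the quote describes, is to look for words that are far more frequent in removed content than in ordinary content. The sketch below is an illustrative assumption about the approach, not Facebook's method; the function and threshold are hypothetical:

```python
from collections import Counter

def signal_words(removed_posts, kept_posts, min_ratio=2.0):
    """Return words disproportionately common in removed posts -- a crude
    stand-in for the 'text-based signals' Facebook describes deriving."""
    removed = Counter(w for p in removed_posts for w in p.lower().split())
    kept = Counter(w for p in kept_posts for w in p.lower().split())
    signals = []
    for word, n in removed.items():
        ratio = n / (kept.get(word, 0) + 1)  # +1 smoothing for unseen words
        if ratio >= min_ratio:
            signals.append(word)
    return sorted(signals)
```

A production system would use far richer features and a trained classifier, but the core idea (mining removed content for predictive signals) is the same.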
"We are absolutely committed to keeping terrorism off our platform", they said.
WhatsApp's end-to-end encryption, however, means that Facebook has no access to the content of most messages and it can't deploy the same image and text analysis tools.
And the social network is using software to try to identify terrorism-focused "clusters" of posts, pages, or profiles.
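Cluster identification of this kind can be pictured as finding connected groups in a graph whose nodes are posts, pages, or profiles and whose edges record interactions between them. The following is an illustrative sketch under that assumption, not Facebook's actual algorithm:

```python
from collections import defaultdict

def find_clusters(edges):
    """Group connected profiles/pages into clusters by traversing an
    undirected 'interacts-with' graph -- an illustrative stand-in for
    the cluster analysis described, not Facebook's real software."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, clusters = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, cluster = [node], set()
        while stack:
            cur = stack.pop()
            if cur in cluster:
                continue
            cluster.add(cur)
            stack.extend(graph[cur] - cluster)
        seen |= cluster
        clusters.append(cluster)
    return clusters
```

Once a cluster containing known terrorist material is identified, the surrounding accounts and pages can be prioritized for human review.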