Facebook admits it lags behind in detecting hate speech

Image Credits: CNN.com

Facebook has recently published its annual transparency report, which states the number of items removed for violating the company's content standards. Facebook has been successful at removing nudity and terrorist content, but it admits it has not been as successful against hate speech.

Facebook acknowledged this failure in its 86-page annual transparency report. The social media platform said it has done well at removing offensive content such as spam, nudity and terrorist material, but has lagged behind in removing hate speech.

Following the data leak scandal that compromised the confidential data of millions of Facebook users, users lost trust in the popular social media platform, and Facebook lost credibility in dealing with security issues and protecting user data. Since then, Facebook has been trying to minimize the damage caused, and the annual transparency report is a step in those efforts.

Mark Zuckerberg promised users that the necessary steps would be taken to make sure such incidents do not happen in the future. Facebook has also recently published standards and guidelines on what counts as a violation in users' posts, and the company is hiring thousands of moderators to keep an eye on offensive content shared on the platform.

Despite spending millions of dollars on content review in 2018, Facebook has not yet been completely successful in removing offensive content such as nudity, violence, spam and terrorist material. Beyond those categories, Facebook is struggling hardest against hate speech: compared with the other categories, Facebook's AI detected the smallest share of hate speech content before users reported it.

“For hate speech, our technology still doesn’t work that well and so it needs to be checked by our review teams. We removed 2.5 million pieces of hate speech in Q1 2018 — 38% of which was flagged by our technology,” stated Guy Rosen, Facebook’s VP of Product Management, in a blog post.

The problem is not an incapability unique to Facebook: current artificial intelligence algorithms can only identify hate speech to a limited extent. The limitation is understandable when you consider that even humans struggle to pick up the subtle differences in offensive content and its cultural nuance, especially in written text. An algorithm, then, obviously cannot effectively understand the historical context behind a piece of content.

Detecting hate speech accurately is considered very important for understanding users' sentiments. Facebook has previously taken down users' posts after mistakenly classifying them as offensive or as hate speech, even though the users were in fact condemning hate speech. Facebook had to apologize for those takedowns, but they still hurt users' feelings. Conversely, users sometimes flag a post as offensive, only for Facebook to reject the complaint and state that the flagged post does not violate its rules.
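A toy sketch can illustrate why such mistaken takedowns happen. This is not Facebook's actual system (the report does not detail it); it is a hypothetical, context-blind matcher that flags any post containing a blocked term, regardless of whether the post promotes or condemns it:

```python
# Hypothetical block list; real systems use far larger, curated lists.
OFFENSIVE_TERMS = {"slur1", "slur2"}

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any term from the block list,
    ignoring the context in which the term appears."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return not OFFENSIVE_TERMS.isdisjoint(words)

hateful = "People like them are slur1."
condemning = "Calling anyone slur1 is unacceptable hate speech."

# Both posts contain the same term, so both get flagged --
# the matcher cannot tell that the second post condemns the slur.
print(naive_flag(hateful))     # True
print(naive_flag(condemning))  # True
```

Distinguishing the two posts requires modeling the surrounding context rather than matching isolated terms, which is exactly where current classifiers still fall short.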

Facebook CEO Mark Zuckerberg is hopeful that AI technology will improve in the future and raise the hate speech detection rate, but that improvement is expected to take several more years.

Source: Facebook

Rabia Noureen
Rabia has an MS in Computer Software Engineering. Here at Geekviews, she delivers insightful analysis on artificial intelligence and social media. Prior to her writing career, she worked in the IT industry. You can reach her at: [email protected]
