
Meta says it’s cracking down on violent content following Hamas attacks

Meta logo on a red background with repeating black icons, giving a squiggly effect.
Image: Nick Barclay / The Verge

In the three days following the terrorist attacks carried out by Hamas against Israel on October 7th, Meta says it removed “seven times as many pieces of content on a daily basis” for violating its Dangerous Organizations and Individuals policy in Hebrew and Arabic versus the two months prior. The disclosure came as part of a blog post in which the social media company outlined its moderation efforts during the ongoing war in Israel.

Although it doesn’t mention the EU or its Digital Services Act, Meta’s blog post was published days after European Commissioner Thierry Breton wrote an open letter to Meta reminding the company of its obligations to limit disinformation and illegal content on its platforms. Breton wrote that the Commission is “seeing a surge of illegal content and disinformation being disseminated in the EU via certain platforms” and “urgently” asked Meta CEO Mark Zuckerberg to “ensure that your systems are effective.” The commissioner has also written similar letters to X, the company formerly known as Twitter, as well as TikTok.

Meta says it “removed or marked as disturbing” over 795,000 pieces of content for violating its policies in Hebrew and Arabic in the three days following October 7th, and says Hamas is banned from its platforms. The company is also taking additional temporary measures, like blocking hashtags and prioritizing reports about Facebook and Instagram Live content relating to the crisis. And because the higher volume of removals means some content may be taken down in error, Meta says it’s removing content without disabling the associated accounts.

The operator of Instagram and Facebook adds that it has established a “special operations center” staffed with experts, including fluent Hebrew and Arabic speakers, to respond to the situation. That’s notable: one of the major steps Meta (then known as Facebook) took after being criticized for its response to genocidal violence in Myanmar was to build up a team of native Myanmar-language speakers.

Even more recently, though, Meta’s moderation track record has not been perfect. Members of its Trusted Partner program, which is supposed to let expert organizations flag concerning content on Facebook and Instagram, have complained of slow responses, and the company has faced criticism for its shifting moderation policies around the Russia-Ukraine war.

X’s outline of its moderation around the conflict does not mention the languages spoken by its response team. The European Commission has since formally sent X a request for information under its Digital Services Act, citing the “alleged spreading of illegal content and disinformation,” including “the spreading of terrorist and violent content and hate speech.”
