Facebook Closes 583 Million Fake Accounts

Facebook revealed Tuesday that it removed more than half a billion fake accounts and millions of pieces of violent or obscene content during the first three months of 2018, pledging more transparency while shielding its chief executive from new public questioning about the company's business practices.

The report said Facebook removed, or placed a warning screen in front of, 3.4 million pieces of graphically violent content in the first quarter, almost triple the 1.2 million of the quarter before.

The report covers Facebook's enforcement efforts in Q4 2017 and Q1 2018, and shows an uptick in the prevalence of nudity and graphic violence on the platform. Several members of Facebook's leadership team are in Paris to discuss enforcing community standards and removing bad content. Of the 2.5 million hate speech posts removed, only 38 percent were flagged by Facebook's technology before users reported them. Compare that with the 95.8 percent of nudity and 99.5 percent of terrorist propaganda that Facebook purged automatically.

The full Facebook report is available online; it details the company's commitment to its content standards and offers further metrics on the deletion of 583 million fake accounts and the harsher crackdown on offending content.

The crackdown is real: in the past six months alone, Facebook has identified and suspended 1.3 billion fake accounts.

Nearly 86 percent of the graphically violent content Facebook acted on was found by the firm's technology before users reported it.

Facebook, which like Google is now working on A.I. technology to identify "hate speech", admitted that its current hate-speech-detection technology "still doesn't work that well" and that automatically flagged content "needs to be checked by our review teams".

Spam continues to be a problem for Facebook: a whopping 837 million pieces were removed from the service during the first quarter.

"Whether it's spam, porn or fake accounts, we're up against sophisticated adversaries who continually change tactics to circumvent our controls", Mr Rosen said.

Facebook says AI has played an increasing role in flagging this content. "While not always flawless, this combination helps us find and flag potentially violating content at scale before many people see or report it". But, as Schultz made clear, none of this is complete. The report, Facebook said, is "created to make it easy for scholars, policymakers and community groups to give us feedback so that we can do better over time". On Tuesday, the Menlo Park-based company shared numbers from its first Community Standards Enforcement Report that help illustrate Facebook's recent performance. Separately, the company said in a blog post that it has evaluated thousands of apps to see whether they had access to large amounts of data, and will now investigate those it has identified as potentially misusing that data. Chief executive Mark Zuckerberg promised the investigation as one of a number of measures put in place to handle the scandal.
