One year after Zuckerberg’s testimony about violent content on Facebook, has anything changed?
By Andrew Keshner and Jacob Passy
MarketWatch
At least 49 people were killed in a mass shooting at two mosques in New Zealand on Friday, March 15. The perpetrator broadcast live footage of the shooting on Facebook.
Facebook and other tech giants like Twitter TWTR and YouTube parent Alphabet GOOG came under scrutiny because of what people saw and shared through their sites: the 17-minute livestreamed mass shooting, not to mention links to a manifesto apparently inspired by white nationalism.
The three social-media companies told MarketWatch they had taken down the content, suspended accounts, were working with authorities and were on guard to remove further posts. Facebook “quickly removed” the video after New Zealand police alerted the company, a company spokeswoman said.
Violence broadcast online isn’t unprecedented. Four people pleaded guilty to charges in connection with the livestreamed 2017 beating of a Chicago teen with special needs. A 74-year-old Cleveland man was shot dead in 2017, and video of the killing was then posted on Facebook. That year, BuzzFeed did its own count and found at least 45 instances of violence broadcast over Facebook Live since the feature’s December 2015 launch.
These sites insist they don’t stand idly by, even with the sheer volume of posting and sharing that goes on. Facebook, Twitter and YouTube all have posting policies and automated systems tasked with spotting and removing content that violates them, along with human moderators, a reportedly difficult job.
Human moderators still play a major role in removing objectionable social-media content, said Jennifer Golbeck, a professor at the College of Information Studies at the University of Maryland.
“Automated approaches to understand video and images just aren’t good enough to rely on at this point,” Golbeck said.