UPDATE: As Trump challenges social-media companies, how Twitter, Facebook and YouTube deal with misinformation and glorification of violence
By Meera Jagannathan and Jillian Berman, MarketWatch
'One person's misinformation might be another person's truth'
As President Trump seeks to limit social-media companies' power, the internet's biggest platforms have come under scrutiny yet again for how they deal with controversial content -- whether it's posted by the president or the average user.
Trump signed an executive order last week challenging protections afforded by Section 230 of the 1996 Communications Decency Act (https://www.eff.org/issues/cda230), which says that online platforms shouldn't be held liable (http://www.marketwatch.com/story/heres-what-you-need-to-know-about-section-230-a-rule-that-made-the-modern-internet-2020-05-28) for content provided by users, claiming it was not intended to afford large social-media platforms "blanket immunity when they use their power to censor content and silence viewpoints that they dislike." Legal experts have deemed the order mostly toothless (http://www.marketwatch.com/story/trump-executive-order-to-punish-social-media-platforms-is-largely-toothless-legal-experts-say-2020-05-28), but say that it could pave the way for legislation.
The order came two days after Twitter (TWTR) affixed fact-check labels to two Trump tweets that made unsubstantiated claims about mail-in ballots. Mark Zuckerberg, the CEO of rival Facebook (FB), responded by claiming that "Facebook shouldn't be the arbiter of truth of everything that people say online," echoing his previous comments (https://www.nytimes.com/2016/11/21/business/media/zuckerberg-and-facebook-must-defend-the-truth.html) on the matter. (As MarketWatch has pointed out (http://www.marketwatch.com/story/facebook-shouldnt-be-the-arbiter-of-truth-zuckerberg-tells-fox-news-2020-05-27), Facebook removed a Trump campaign ad (http://www.marketwatch.com/story/facebook-removes-misleading-trump-campaign-ads-after-pelosi-cites-census-confusion-2020-03-05) in March to prevent confusion over the 2020 Census.)
Twitter made another unprecedented move Friday morning after the president wrote that "when the looting starts, the shooting starts" in a tweet about protests over the death of George Floyd (http://www.marketwatch.com/story/this-shouldnt-be-normal-says-obama-of-george-floyd-death-in-minnesota-but-for-millionsbeing-treated-differently-on-account-of-race-is-tragically-painfully-maddeningly-normal-2020-05-29), a black man who died in Minneapolis police custody. The tweet, while still viewable because "it may be in the public's interest for the Tweet to remain accessible," is shielded by a message noting that it violates Twitter's rules about glorifying violence.
Trump attempted to clarify his comments in a tweet Friday afternoon.
"Looting leads to shooting, and that's why a man was shot and killed in Minneapolis on Wednesday night - or look at what just happened in Louisville with 7 people shot. I don't want this to happen, and that's what the expression put out last night means...." he wrote. "It was spoken as a fact, not as a statement. It's very simple, nobody should have any problem with this other than the haters, and those looking to cause trouble on social media. Honor the memory of George Floyd!"
When it comes to judging content, Twitter, Facebook and Google-owned YouTube (GOOGL) are all essentially trying to predict the harm that might be associated with various pieces of content and then moderate based on that particular harm, said Cliff Lampe, a professor of information at the University of Michigan.
"[They're] looking at things that pretty much any reasonable person would agree would harm society, whether that be a crime, violence against people, or misinformation," he said. "Of course, misinformation is the most contentious of those -- because in this current context, one person's misinformation might be another person's truth."
Though all of these platforms are wrestling with these issues, their approaches -- and how those approaches actually play out on the platforms -- differ slightly, said Henry Fernandez, a senior fellow at the Center for American Progress Action Fund and the co-chair of the Change the Terms Coalition, a group of 40 organizations working to combat hateful content on technology platforms.
For example, Twitter allows white supremacists and white nationalists to have Twitter accounts, whereas Facebook and YouTube generally do not. (All three will typically remove a user for repeatedly inciting violence.) In addition, the platforms have policies that regulate content promoting misinformation, particularly around voting.
"Where they have gotten into difficulty is around the issues of how they will enforce their rules when you're talking about elected officials," Fernandez said. "All of the platforms have drawn distinctions on elected officials." In other words, their content is typically not fact-checked or removed in the same way a similar sentiment would be if it came from a regular user.
Twitter's steps this week to alert users when content from elected officials either provides misinformation about voting or incites violence contrast with how the other platforms treat this content, Fernandez said. The move "represents a remarkable effort by Twitter and its leadership to protect the First Amendment," he said.
Of course, moderation efforts can have their shortcomings, Lampe said. For example, Twitter will get pushback on flagged content as people argue about what is and isn't true, he said; plus, "there's just too much content for them to do it evenly across the board, so it's going to feel unfair to a lot of people." And Facebook's approach of shutting down groups helps to take "a big node out of a network," he said, but it's easy enough to create new groups in their place -- "and, of course, you don't have to create groups to have bad content and share misinformation."
Here's what the world's biggest social-media companies have said in recent months about harmful content, misinformation and hate speech:
Earlier this month, Twitter announced it would introduce new labels and warnings to combat "potentially harmful and misleading content" (https://blog.twitter.com/en_us/topics/product/2020/updating-our-approach-to-misleading-information.html) related to COVID-19. Labels on such content would link users to information from an "external trusted source" or a page curated by Twitter, the company said.
The company may also apply a warning that a tweet conflicts with public-health guidance before users are allowed to view it, it said, "depending on the propensity for harm and type of misleading information."
Twitter outlined in a chart how it would (or wouldn't) act on false or misleading content, depending on the propensity for harm: Misleading information with a severe propensity for harm warrants removal, for example, while disputed information with a severe propensity for harm receives a warning. Meanwhile, the company said it would take no action against unverified claims.
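For readers who prefer a schematic view, the three cells of Twitter's chart that are described above can be sketched as a simple lookup table. This is purely illustrative -- the category and action names are paraphrases of the article's wording, not Twitter's internal terminology, and the chart's remaining cells are omitted here:

```python
# Illustrative sketch of the decision chart described in Twitter's May 2020
# announcement: (content category, propensity for harm) -> action.
# Only the three pairings reported in the article are encoded.
ACTIONS = {
    ("misleading", "severe"): "remove",   # misleading info, severe harm
    ("disputed", "severe"): "warning",    # disputed claim, severe harm
    ("unverified", "severe"): "no action",  # unverified claims: no action
}

def moderation_action(category: str, harm: str) -> str:
    """Return the action for a (category, harm) pair; default to no action."""
    return ACTIONS.get((category, harm), "no action")
```

Under this framing, only the combination of a misleading claim and a severe propensity for harm triggers outright removal; everything less gets a lighter-touch label, warning or nothing at all.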
Twitter had earlier updated its rules for "synthetic and manipulated media" (https://blog.twitter.com/en_us/topics/company/2020/new-approach-to-synthetic-and-manipulated-media.html) in February, laying out new criteria for labeling and removing such posts that might impact public safety or cause serious harm. Harms under consideration include threats to a person or group's physical safety; threats to privacy, freedom of expression or civic participation; and risk of mass violence.
"You may not deceptively share synthetic or manipulated media that are likely to cause harm," head of site integrity Yoel Roth and group product manager at Ashita Achuthan wrote in an official Twitter blog post. "In addition, we may label Tweets containing synthetic and manipulated media to help people understand the media's authenticity and to provide additional context."
The company also has a "glorification of violence" policy that prohibits celebrating, praising or condoning violent crimes, violent events that targeted people because of their protected-group status, and perpetrators of such violence.
A Twitter spokeswoman declined to comment for this story.
Facebook, which has come under fire in the past (https://www.reuters.com/article/us-facebook-congress/facebooks-zuckerberg-grilled-in-u-s-congress-on-digital-currency-privacy-elections-idUSKBN1X2167) for allowing political misinformation to proliferate on its platform, says in its "false news" policy (https://www.facebook.com/communitystandards/false_news) that it wants to keep users informed "without stifling productive public discourse."
"There is also a fine line between false news and satire or opinion," the company says. "For these reasons, we don't remove false news from Facebook but instead, significantly reduce its distribution by showing it lower in the News Feed."
On the COVID-19 front, Facebook said in April (https://about.fb.com/news/2020/04/covid-19-misinfo-update/) that it was connecting users to credible public-health resources and stemming the spread of "misinformation and harmful content" by enlisting a growing army of fact-checking organizations.
"Once a piece of content is rated false by fact-checkers, we reduce its distribution and show warning labels with more context," Facebook said. "Based on one fact-check, we're able to kick off similarity detection methods that identify duplicates of debunked stories." The company later said it had applied warning labels to some 50 million pieces of COVID-19-related content in April based on about 7,500 articles by fact-checking partners.
Facebook has also said it will provide News Feed messages to people who had previously interacted with harmful, since-removed COVID-19 misinformation, and connect them with information from reliable sources.
(MORE TO FOLLOW) Dow Jones Newswires
June 01, 2020 22:16 ET (02:16 GMT)
Copyright (c) 2020 Dow Jones & Company, Inc.