Misinformation

Twitter Using New Way to Counter Misinformation

Twitter has announced that user behavior, not just tweet content, will now be a factor in how conversations are ranked, demoted, or even hidden from general view. Content could be demoted by the platform's algorithm if its authors have been blocked frequently, if they have multiple accounts using the same IP address, or if they regularly tweet at large numbers of accounts they don't follow.
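For readers curious what behavior-based demotion might look like under the hood, here is a purely hypothetical sketch in Python. Twitter has not published its actual signals, weights, or thresholds, so every name and number below is invented simply to illustrate the general idea of scoring an account on its behavior rather than on what it posts.

```python
# Hypothetical illustration only: Twitter has not published its actual
# signals, weights, or thresholds. This toy sketch just shows how
# behavioral signals (rather than tweet content) could be combined
# into a demotion decision.
from dataclasses import dataclass

@dataclass
class UserSignals:
    block_rate: float                # fraction of interactions that end in a block
    accounts_on_same_ip: int         # number of accounts sharing this user's IP
    unsolicited_mention_rate: float  # share of tweets aimed at accounts the user doesn't follow

def demotion_score(s: UserSignals) -> float:
    """Combine the behavioral signals into one score (weights are invented)."""
    score = 3.0 * s.block_rate
    score += 0.5 * max(0, s.accounts_on_same_ip - 1)
    score += 2.0 * s.unsolicited_mention_rate
    return score

def should_demote(s: UserSignals, threshold: float = 1.5) -> bool:
    """Demote the user's content when the combined score crosses the threshold."""
    return demotion_score(s) >= threshold

suspect = UserSignals(block_rate=0.4, accounts_on_same_ip=5,
                      unsolicited_mention_rate=0.3)
print(should_demote(suspect))  # True: behavior alone triggers the demotion
```

Note that nothing in the sketch looks at the text of a tweet, which is the point of the reported change: the same post could be ranked normally or demoted depending on who is posting it.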

Intel Contest Includes Project on Misinformation

About 1,800 students from 81 countries competed in the recent Intel International Science and Engineering Fair, held this year in Pittsburgh. One high-school student's project was aimed at combating fake news after he nearly fell for a false headline on Facebook. Ryan Beam says he almost believed a headline claiming the Pope had endorsed Donald Trump (it was untrue), which got him thinking about ways to study the fake news situation for himself. His study found that people who identified themselves as Independents were the least likely to share misinformation.

Twitter is a Disaster During a Disaster

This may not surprise you, but during disasters, Twitter is full of false information. When confronted with falsehoods, “86 to 91 percent of the users spread the false news by retweeting or liking,” reports a new study from the University at Buffalo. “To the best of our knowledge, this is the first study to investigate how apt Twitter users are at debunking falsehoods during disasters,” said Jun Zhuang, associate professor of Industrial and Systems Engineering at the school and the lead author of the study. “Unfortunately, the results paint a less-than-flattering picture.”

Adrift in the Fake News Era

National Public Radio recently interviewed Wikipedia founder Jimmy Wales about his thoughts on fake news online. It is interesting to remember that Wikipedia was once the target of many teachers’ ire because students often used information from the site in their reports without vetting it, a concern that now seems almost quaint in the face of other fake news scandals.

Wales feels we all need to be skeptical of the sources of things we share online. He said that many times, people will find a story that confirms what they already believe about a particular subject, so they go ahead and share it. But the truth is, anyone could have written that article, and without a quick Google search to vet the sources, you could just be perpetuating the problem.


Twitter and Facebook Support the Honest Ads Act

Twitter and Facebook are supporting the Honest Ads Act, a proposal that calls for platforms with at least 50 million users to retain and make available for public viewing all political ads bought by groups spending more than $500. The bill would also require political ads to be labeled with "clear and conspicuous" disclosures and would require platforms to make "reasonable efforts" to prevent ads from being bought by foreign agencies.

Hamilton 68

Are you curious about the Russian social media disinformation campaigns that have been a hot topic in the news recently? The Hamilton 68 dashboard tracks Russian social media in real time as it appears on Twitter. Named after Alexander Hamilton's Federalist Paper 68 (on the importance of "protecting America’s electoral process from foreign meddling"), the dashboard initially tracked election-related tweets but has since expanded to additional topics, such as the Parkland school shooting. It is an interesting tool to look at with your kids when talking about misinformation online.

Curious About How Conspiracy Theories Get Spread Online?

The latest online attacks against the teen survivors of the Parkland shooting are a good case study in how this happens and how quickly it occurs. An article in The Washington Post entitled "We studied thousands of anonymous posts about the Parkland attack – and found a conspiracy in the making" outlines the part that anonymous social media forums play in the process. It’s a primer on how misinformation is created on purpose, how it endures, and the havoc it wreaks in the lives of those who are targeted.

Who Sponsored That Ad?

The Federal Election Commission has drawn up a proposed framework that would require political digital and social media ads to adopt the same sponsorship disclaimer rules as those appearing on TV, read on radio, and printed in publications. Political audio and video ads on both social and digital platforms would require the candidates paying for them to say their names and include the statement, "And I approve of this message." Graphic and text ads would have to display the sponsor's name "in letters of sufficient size to be clearly readable," the proposal says. In addition, Facebook has announced that it will mail postcards to political ad buyers to verify that they live in the US. A code from the postcard will be needed to buy a political ad on the platform, and November's midterm elections will be the first time the process is used.
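As a rough illustration of how a mailed code could gate an ad purchase, here is a hypothetical Python sketch. Facebook has not described its implementation, so the function names, code format, and storage below are all invented; the point is simply that physical mail ties the buyer to a US address before the purchase is unlocked.

```python
# Hypothetical sketch of a mail-based verification flow like the one
# Facebook described: a one-time code is mailed to a US address, and
# the ad purchase is unlocked only when the buyer enters the matching
# code. All names and details here are invented for illustration.
import secrets

pending_codes: dict[str, str] = {}  # buyer_id -> code printed on the postcard

def mail_postcard(buyer_id: str, us_address: str) -> None:
    """Generate a one-time code and (in reality) mail it on a postcard."""
    code = f"{secrets.randbelow(10**6):06d}"
    pending_codes[buyer_id] = code
    print(f"Postcard with verification code mailed to {us_address}")

def may_buy_political_ad(buyer_id: str, entered_code: str) -> bool:
    """Allow the political ad purchase only if the mailed code matches."""
    return pending_codes.get(buyer_id) == entered_code
```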

Facebook Users Vet News Sources

Facebook's latest news feed update will prioritize news sources rated as trustworthy by "a diverse and representative sample" of its users, the company's News Feed chief Adam Mosseri wrote in a recent blog post. Publications with lower scores could see a decrease in distribution, and there will also be an emphasis on promoting local news. Facebook CEO Mark Zuckerberg, writing recently on the same subject, said that prioritizing news from trusted publishers is part of Facebook’s broader effort to revamp the News Feed and “encourage meaningful social interactions with family and friends over passive consumption.”
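To make the idea concrete, here is a hypothetical Python sketch of how survey ratings could be averaged into a publisher trust score that scales a story's reach. Facebook has not published its actual formula; the names, weights, and local-news boost below are invented for illustration only.

```python
# Hypothetical sketch only: Facebook has not published its formula.
# This just illustrates averaging sampled user ratings into a publisher
# trust score and scaling distribution by it. Weights are invented.
from statistics import mean

def trust_score(survey_ratings: list[float]) -> float:
    """Average trustworthiness ratings (0 to 1) from a sample of users."""
    return mean(survey_ratings) if survey_ratings else 0.5  # neutral default

def adjusted_reach(base_reach: int, score: float, is_local: bool) -> int:
    """Scale a story's distribution by publisher trust; boost local news."""
    reach = base_reach * score
    if is_local:
        reach *= 1.2  # invented boost reflecting the emphasis on local news
    return int(reach)

ratings = [0.9, 0.7, 0.8, 0.6]  # sampled user ratings for one publisher
print(adjusted_reach(10_000, trust_score(ratings), is_local=False))  # 7500
```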

Facebook May Be Losing the War on Hate Speech

Can Facebook actually keep up with the hate speech and misinformation that pours through the portal? Facebook seems to be working on the misinformation side, but even that effort is coming in fits and starts. The company had to retreat from marking fake news articles with red flags after discovering that the flags actually spurred people to click on and share those stories; it now instead lists links below a disputed article to related articles with a more balanced view.


Now a new investigation from ProPublica shows that Facebook’s policy regarding hate speech is also having issues. In an analysis of almost 1,000 crowd-sourced posts, reporters at ProPublica found that Facebook fails to enforce its own community standards evenly. A sample of 49 of those posts was sent directly to Facebook, which admitted that in 22 cases its human reviewers had erred, mistakenly flagging frank conversations about sexism or racism as hate speech. The company also offers no formal appeal process for decisions its users disagree with, so seemingly innocent posts may get caught in the reviewers’ net.


It is definitely a tough issue, and this year Germany will begin enforcing a law requiring social media sites (including Facebook, YouTube, and Twitter, but also applying more broadly to sites like Tumblr, Reddit, and Vimeo) to act within 24 hours to remove illegal material, hate speech, and misinformation. Failure to do so could lead to fines of up to 50 million euros, or about $60 million. Is this what should be done here in the US, or is that too strict? Perhaps the topic of policing content would make a good dinner table discussion to have with your teens.
