Twitter Using New Way to Counter Misinformation

Twitter has announced that user behavior, not simply tweet content, will now be a factor in the way conversations are modified, or even blocked from general consumption. Content from users could be demoted by the platform's algorithm if the users have been blocked frequently, if they have multiple accounts using the same IP address, or if they regularly tweet to a large number of accounts that they don't follow.

Intel Contest Includes Project on Misinformation

About 1,800 students from 81 countries competed in the recent Intel International Science and Engineering Fair, held this year in Pittsburgh. One high-school student's project was aimed at combating fake news after he nearly fell for a false headline on Facebook. Ryan Beam says he almost believed a headline about the Pope endorsing Donald Trump (which was indeed untrue), an experience that got him thinking about ways to examine the fake news situation for himself. His study found that people who identified themselves as Independents were the least likely to share misinformation.

Twitter is a Disaster During a Disaster

This may not surprise you, but during disasters, Twitter is full of false information. When confronted with falsehoods, “86 to 91 percent of the users spread the false news by retweeting or liking,” reports a new study from the University at Buffalo. “To the best of our knowledge, this is the first study to investigate how apt Twitter users are at debunking falsehoods during disasters,” said Jun Zhuang, associate professor of Industrial and Systems Engineering at the school and the lead author of the study. “Unfortunately, the results paint a less-than-flattering picture.”

Adrift in the Fake News Era

National Public Radio recently interviewed the founder of Wikipedia, Jimmy Wales, about his thoughts on fake news online. It is interesting to remember that Wikipedia was once the subject of so many teachers’ ire because students often used information from the site in their reports without vetting it; now that concern seems almost quaint in the face of today's fake news scandals.

Wales feels we all need to be skeptical about the sources of what we share online. He said that people often find a story that confirms what they already believe about a particular subject, so they go ahead and share it. But the truth is that anyone could have written that article, and without a quick Google search to vet the sources, you could simply be perpetuating the problem.


Twitter and Facebook Support the Honest Ads Act

Twitter and Facebook are supporting the Honest Ads Act, a proposal that calls for platforms with a minimum of 50 million users to retain and make available for public viewing all political ads bought by groups investing more than $500. The bill would also require political ads to be labeled with "clear and conspicuous" disclosures and would require that platforms make "reasonable efforts" to prevent ads from being bought by foreign agencies.

Hamilton 68

Are you curious about the Russian social media disinformation campaigns that have been a hot topic in the news recently? The Hamilton 68 dashboard tracks Russian social media in real time as it appears on Twitter. Named after Alexander Hamilton's Federalist Paper 68 (on the importance of "protecting America’s electoral process from foreign meddling"), the dashboard initially tracked election-related tweets but has since expanded to additional topics, such as the Parkland school shooting. It is an interesting tool to look at with your kids when talking about misinformation online.

Curious About How Conspiracy Theories Get Spread Online?

The latest online attacks against the teen survivors of the Parkland shooting are a good case study in how this happens and how quickly it occurs. An article in The Washington Post entitled We studied thousands of anonymous posts about the Parkland attack – and found a conspiracy in the making outlines the part that anonymous social media forums play in the process. It’s a primer on how misinformation is created on purpose, how it endures, and the havoc it wreaks in the lives of those who are targeted.

Who Sponsored That Ad?

The Federal Election Commission has drawn up a proposed framework that would require political digital and social media ads to adopt the same sponsorship disclaimer rules as those appearing on TV, read on radio and in print. Political audio and video ads on both social and digital platforms would require candidates paying for the ads to say their names and include the statement, "And I approve of this message," and graphic and text ads would have to display the sponsor's name "in letters of sufficient size to be clearly readable," the proposal says. In addition, Facebook has announced that it will mail postcards to political ad buyers to verify that they live in the US. A code from the postcard will be needed to buy a political ad on the platform, and November's midterm elections will be the first time the process is used.

Facebook Users Vet News Sources

Facebook's latest news feed update will include a prioritization of news sources rated as trustworthy by "a diverse and representative sample" of its users, the company's News Feed chief Adam Mosseri wrote in a recent blog post. Publications with lower scores could see a decrease in distribution, while there will also be an emphasis on promoting local news. Facebook CEO Mark Zuckerberg, writing recently on the same subject, said that prioritizing news from trusted publishers is part of Facebook’s broader effort to revamp the News Feed and “encourage meaningful social interactions with family and friends over passive consumption.”

Facebook May Be Losing the War on Hate Speech

Can Facebook actually keep up with the hate speech and misinformation that pours through the portal? Facebook does seem to be working on the misinformation side, but even that effort is coming in fits and starts. The company had to retreat from using red flags to signal that articles are fake news after discovering that the flags instead spurred people to click on or share them; it has gone instead to listing links below such articles to related articles with a more balanced view.


Now a new investigation from ProPublica shows that Facebook’s policy regarding hate speech is also having issues. In an analysis of almost 1,000 crowd-sourced posts, reporters at ProPublica found that Facebook fails to enforce its own community standards evenly. A sample of 49 of the posts was sent directly to Facebook, which admitted that in 22 cases its human reviewers had erred, mistakenly flagging frank conversations about sexism or racism as hate speech. The problem is compounded because the company does not offer a formal appeal process for decisions its users disagree with, so seemingly innocent outbursts may also get caught up in the reviewers’ net.


It is definitely a tough issue, and this year Germany will enforce a law requiring social-media sites (including Facebook, YouTube, and Twitter, but also more broadly applying to websites like Tumblr, Reddit, and Vimeo) to act within 24 hours to remove illegal material, hate speech, and misinformation. Failure to do so could lead to fines of up to 50 million euros, or about $60 million. Is this what should be done here in the US, or is it too strict? Perhaps the topic of policing content is a good dinner-table discussion to have with your teens.

No More Red Flags on Facebook

Facebook is getting rid of the red flags that signal articles are fake news after discovering that the flags instead spurred people to click on or share them. The company is instead including related links under such articles that point to more trustworthy sources reporting on the topic. The “related articles” effort is something Facebook started testing earlier this year. By the way, if you do try to share posts with contentious content, a message will pop up telling you that you may want to check out other sources before you do so. In other words, you won’t be able to use the excuse that you had “no idea” that the article you passed on might contain false or unproven content.

Former President Obama Talks to Prince Harry About Social Media

Former President Barack Obama and the United Kingdom's Prince Harry took to the airwaves for a recent BBC interview where they discussed the potential dangers of social media and how it should be used to promote diversity and find common ground. "One of the dangers of the internet is that people can have entirely different realities. They can be cocooned in information that reinforces their current biases," Obama stated. The former president also echoed something that parents concerned about their kids growing up in a Digital Age try to communicate to their children, reiterating that "the truth is that on the internet everything is simplified, and when you meet people face to face it turns out they are complicated." Perhaps something every cyberbully should remember?

Snapchat Takes Aim at Misinformation

Snapchat is taking aim at misinformation with some unconventional changes to the design of the app (which for many parents is an app that has been associated with cyberbullying and sexting in the past). While the app will still initially open to the phone camera, allowing users to make and share disappearing photos with friends, the new design will try to separate the personal (social) side of the app from content produced by outside media sources. The media section will also be vetted and approved by humans at Snap, the parent company, rather than by algorithms. The use of human curators will allow Snapchat to program content so that users’ preferences do not keep them from seeing a wide array of opinions and ideas. In addition to winnowing out fake news, this may keep Snapchat from becoming a place that reinforces narrow sets of thinking. This approach is in contrast to that of Facebook and Google, which have not vetted much of the hate speech, fake news, and even disturbing videos aimed at children that have proliferated on those platforms over time.

The Trust Project and Fake News

Still worried about falling into a “fake news” trap by reading or passing along something that isn’t factual? A nonpartisan effort called The Trust Project, hosted at Santa Clara University, is working to address this situation by helping online users distinguish between reliable journalism and promotional content or misinformation. Recently, Facebook started offering “Trust Indicators,” a new icon that will appear next to articles in your News Feed. When you click on this icon, you can read information the publisher has shared about its organization’s “ethics and other standards, the journalists’ backgrounds, and how they do their work,” according to an announcement from The Trust Project.

It is a work in progress, with Facebook, Google, Bing, Twitter, and other international news organizations committing to displaying these indicators, although not all implementations are in place.

The onus to figure out whether something is fake, though, is still on the user. Instead of labeling content as disputed, Trust Indicators let users learn more about the organization behind the news and come to their own conclusions about the content. Whether it will actually help in the long run, of course, remains to be seen.

Bunk – The History of Plagiarism, Hoaxes and Fake News

We still need to talk to kids about how to evaluate sources online and off, but we could all probably stand to know more about the history of hoaxes, plagiarism, and fake news. A new book entitled Bunk: The Rise of Hoaxes, Humbug, Plagiarists, Phonies, Post-Facts, and Fake News by Kevin Young draws connections between the days of P.T. Barnum and the 21st century, comparing terms like swindler and confidence man to contemporary buzzwords like plagiarism, truthiness, and fake news. More than just telling tales of hoaxes revealed, Young discusses the theory of the hoax and the effects of deception on politics, online news, and everyday life, then and now.

The Status of Fake News

According to a new survey by Pew Research Center entitled “The Future of Truth and Misinformation Online,” most Americans suspect that made-up news is having an impact. About two-in-three U.S. adults (64%) say fabricated news stories cause a great deal of confusion about the basic facts of current issues and events. This sense is shared widely across incomes, education levels, partisan affiliations and most other demographics. Though they sense these stories are spreading confusion, Americans express a fair amount of confidence in their own ability to detect fake news, with about four-in-ten (39%) feeling very confident that they can recognize news that is fabricated and another 45% feeling somewhat confident.

Some Americans also say they themselves have shared fake news. Overall, 23% say they have shared a made-up news story, with 14% saying they shared a story they knew was fake at the time and 16% having shared a story they later realized was fake. When it comes to how to prevent the spread of fake news, many Americans expect social networking sites, politicians and the public itself to do their share. Fully 45% of U.S. adults say government, politicians and elected officials bear a great deal of responsibility for preventing made-up stories from gaining attention, 43% say this is the public’s responsibility, and 42% say it is part of the job of social networking sites and search engines.

Misinformation – How Facts and Fiction Intermingle on Social Media

Now that nearly two-thirds of Americans get at least some of their news from social media, we all need to stop and think about how our biases and our exposure to misinformation affects the way we perceive the news and even how we fight against false claims. The New York Times recently featured an article entitled How Fiction Becomes Fact on Social Media that focuses on just those concerns.

The article reminds us that it is our often subconscious psychological biases that make so many of us vulnerable to misinformation. Skepticism about what we read as “news” online is a good start. However, researchers have found that our own innate biases will let certain things pass as “likely.” We all need to remember that Facebook, Google, and Twitter have their own skin in the game and that they are serving up “juicy” news and information that keeps us coming back for more. It’s so easy to pass along stories before you have a chance to really think about them or look at the source. Repetition can also make a story seem credible: if you read the same news headline over and over again, it starts to feel true. As one expert put it, “We overweight information from people we know.” This sounds like the way news was passed around back in high school, doesn’t it?

Turning Your Kids Into Web Detectives

While kids are great at signing up for and using social media, chances are they are not very good at evaluating and vetting the news and other information that appears in their online feeds. So what are some fact-checking resources your kids (and you!) can use to verify or debunk the information they find online? Some of you may have heard of sites like FactCheck.org, PolitiFact.com, and Snopes.com. The first two focus mostly on truth in politics; Snopes is famous as a site for checking out internet rumors. One you may not be familiar with is OpenSecrets.org, a nonpartisan organization that tracks the influence of money in U.S. politics; it is probably aimed more at older students and adults. A final resource doesn’t perform fact-checking itself but is a tool that lets you fact-check things you find online: the Internet Archive’s “Wayback Machine” lets you see how a website looked, and what it said, at different points in the past. That can be very valuable for seeing how, for example, the US government treated different topics under different administrations or at different times under the same administration. Want to see The New York Times’ home page on just about any day since 1996? It’s there, as is Google’s homepage from 1998. The site can be a little distracting, though, so make sure kids know what they are looking for before they dive in.

Evaluating the Quality of Online Information

A newly updated article on the Edutopia site (supported by the George Lucas Educational Foundation) on evaluating the quality of online resources is worth reviewing with your kids, especially before they start any research project. Part of the article addresses how to be a healthy skeptic, providing a particularly helpful list of questions we should all ask ourselves when conducting online research.