Fake news

Traditional media is no longer the only source of news; the internet and social media have become the main platforms for news sharing.
According to the Pew Research Center, 62% of Americans get their news from social media. This means anyone can create and share news stories, regardless of their truthfulness.
Fake news can shape the global perception of important issues and can play a decisive role in determining events like elections. This particularly applies to non-technical people who still live in the paradigm of 'if it is on the news, it must be true'. Many people, especially older generations, do not realize that anyone can publish stories on social media, unlike traditional media, where news comes from reputable organizations.
The issue is so pressing that companies like Google and Facebook have started tackling this challenging matter, even if drawing the line between fake news and freedom of speech is not always easy.

Google Fact Checker

Last October, Google, together with Jigsaw, began enabling publishers to show a "Fact Check" tag on news stories in Google News. It is a label identifying articles that include information fact-checked by news publishers and fact-checking organizations.
When a Google search returns results containing fact checks for one or more public claims, that information is shown clearly on the search results page. The snippet displays information on the claim, who made the claim, and the fact check of that particular claim.
This information is not available for every search result, and there may be search result pages where different publishers checked the same claim and reached different conclusions. These fact checks are presented so people can make more informed judgements. For publishers to be included in this feature, they must use the Schema.org ClaimReview markup on the specific pages where they fact check public statements, or they can use the Share the Facts widget developed by the Duke University Reporters Lab and Jigsaw.
If you have a web page that reviews a claim made by others, you can include a ClaimReview structured data element on your web page. This element enables Google Search results to show a summarized version of your fact check when your page appears in search results for that claim.

Imagine a page that evaluates the claim that the earth is flat. Here is what a search for “the world is flat” might look like in Google Search results if the page provides a ClaimReview element:

The structured data will look something like this on the page that hosts this fact check:

{
 "@context": "http://schema.org",
 "@type": "ClaimReview",
 "datePublished": "2016-06-22",
 "url": "http://example.com/news/science/worldisflat.html",
 "itemReviewed": {
  "@type": "CreativeWork",
  "author": {
   "@type": "Organization",
   "name": "Square World Society"
  },
  "datePublished": "2016-06-20"
 },
 "claimReviewed": "The world is flat",
 "author": {
  "@type": "Organization",
  "name": "Example.com science watch",
  "sameAs": "https://example.flatworlders.com/we-know-that-the-world-is-flat"
 },
 "reviewRating": {
  "@type": "Rating",
  "ratingValue": "1",
  "bestRating": "5",
  "worstRating": "1",
  "alternateName" : "False"
 }
}
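The structured data above is just JSON, so publishers who generate pages programmatically can build it the same way. As a minimal sketch (the helper names and parameters here are hypothetical, not part of any Google API; only the Schema.org field names come from the example above), a Python script could assemble the ClaimReview dict and wrap it in the JSON-LD script tag that crawlers parse:

```python
import json

def claim_review_jsonld(claim, claim_author, review_url, reviewer,
                        rating_value, rating_label, best=5, worst=1):
    """Build a Schema.org ClaimReview structure as a Python dict.
    Field names follow Schema.org; all values are caller-supplied."""
    return {
        "@context": "http://schema.org",
        "@type": "ClaimReview",
        "url": review_url,
        "claimReviewed": claim,
        "itemReviewed": {
            "@type": "CreativeWork",
            "author": {"@type": "Organization", "name": claim_author},
        },
        "author": {"@type": "Organization", "name": reviewer},
        "reviewRating": {
            "@type": "Rating",
            "ratingValue": str(rating_value),
            "bestRating": str(best),
            "worstRating": str(worst),
            "alternateName": rating_label,
        },
    }

def as_script_tag(data):
    """Wrap the structured data in a JSON-LD script tag for the page."""
    return ('<script type="application/ld+json">\n'
            + json.dumps(data, indent=2)
            + "\n</script>")

markup = claim_review_jsonld(
    claim="The world is flat",
    claim_author="Square World Society",
    review_url="http://example.com/news/science/worldisflat.html",
    reviewer="Example.com science watch",
    rating_value=1,
    rating_label="False",
)
print(as_script_tag(markup))
```

Emitting the markup from one helper keeps the required fields consistent across every fact-check page a publisher produces.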

Only publishers that meet Google's standards as an authoritative source of information qualify for inclusion. If a publisher or fact-check claim does not meet those standards, Google can, at its discretion, ignore that site's markup.

Facebook fact checking

Facebook also recently introduced a feature that flags certain posts as "disputed content". It started rolling out the feature in March, as part of a partnership with a group of external fact-checking sites, including Snopes.com, ABC News, and PolitiFact.
When a user tries to share links that have been marked as questionable, an alert pops up that says the story in question has been disputed. The alert links to more information about the fact-checking feature and says that “sometimes people share fake news without knowing it.”

If the user continues to share the link or story anyway, the link is supposed to appear in the news-feeds of other users with a large note that says “disputed”, and lists the organizations that flagged it as fake or questionable.

However, many Facebook critics believe fake news is spread rapidly by the site's news-feed algorithm.
In a number of cases, the fake-news warning is either applied too late, after a story has already gone viral and been shared by large numbers of people, or is having the opposite effect. Sometimes the 'disputed' flag actually increases traffic and shares, since people share the post thinking Facebook is trying to silence it.

One of the problems with the kind of fact-checking process Facebook has implemented is that it only works if users trust both the social network and the third-party fact-checkers it has partnered with.
If a person doesn’t trust a specific information source, then arguments made by that source about the inaccuracy of a story can actually convince the person of the opposite, even if the source has facts and evidence to support their argument.
Some sociologists have argued that Facebook and Google cannot solve the fake news problem because it is driven by human nature and a clash of cultures, and that cannot be changed through argument or the presentation of facts.

AI to the rescue

Due to the low cost of sharing information, far more operators are involved in spreading news, making it nearly impossible to check and regulate all false news sources.
One approach uses machine learning to analyze text and assign it a score that represents its likelihood of being fake news. To increase transparency, these scores are broken down into several components that explain the rating.
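To make the idea concrete, here is an illustrative sketch of such a scorer, not any production system: it combines a few hand-picked surface cues (the cue lists and equal weights are assumptions for illustration; a real system would learn features and weights from labeled data) and reports each component alongside the total, which is the transparency property described above:

```python
import re

# Hypothetical cue lists; a trained model would learn these from data.
CLICKBAIT = {"shocking", "unbelievable", "secret", "miracle", "exposed"}
HEDGES = {"reportedly", "allegedly", "sources", "rumor"}

def fake_news_score(text):
    """Score a story 0..1 (higher = more fake-news-like) and break the
    score into named components so the rating can be explained."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    components = {
        "clickbait_terms": sum(w in CLICKBAIT for w in words) / n,
        "unsourced_hedges": sum(w in HEDGES for w in words) / n,
        "exclamations": min(text.count("!") / 5, 1.0),
        "all_caps_words": sum(w.isupper() and len(w) > 2
                              for w in text.split()) / n,
    }
    # Equal weights for the sketch; a trained model would fit them.
    score = min(sum(components.values()), 1.0)
    return score, components

score, parts = fake_news_score(
    "SHOCKING! Sources say this miracle cure was EXPOSED!!!")
```

The per-component breakdown is what lets a reader see why a story scored high, rather than receiving an opaque verdict.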
Such work is a cross-disciplinary endeavor, because solving the fake news crisis rests not only on AI but also requires social and political input.
AI can detect the semantic meaning behind a web story by analyzing the headline, the subject, the geolocation, and the main body text. Natural language processing engines can look at these factors to determine how one site’s coverage compares with how other sites are reporting the same facts, as well as how mainstream media sources are handling it.
The Fake News Challenge (www.fakenewschallenge.org) is a platform that fights the battle against fake news. It starts with a Stance Detection task that examines the perspective of a news article relative to another take on the same topic; for example, it detects whether two headlines agree or contradict each other.
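A toy version of stance detection can be sketched in a few lines. This is a crude word-overlap heuristic invented here for illustration, not the Fake News Challenge baseline (which uses trained classifiers over engineered features): it calls two takes unrelated if they share few words, and guesses disagreement when exactly one side carries negation cues:

```python
import re

# Hypothetical negation/denial cues for the sketch.
NEGATIONS = {"not", "no", "never", "fake", "false", "denies", "hoax"}

def tokens(text):
    """Lowercased word set for rough lexical comparison."""
    return set(re.findall(r"[a-z']+", text.lower()))

def stance(headline, other_take):
    """Crude stance guess between two takes on a topic:
    'unrelated' if they share few words, 'disagree' if exactly one
    side carries negation cues, else 'agree'."""
    a, b = tokens(headline), tokens(other_take)
    overlap = len(a & b) / max(min(len(a), len(b)), 1)
    if overlap < 0.3:
        return "unrelated"
    if bool(a & NEGATIONS) != bool(b & NEGATIONS):
        return "disagree"
    return "agree"
```

For instance, "The world is flat" versus "Scientists say the world is not flat" would come out as a disagreement, while a headline about an entirely different topic would be marked unrelated. Real stance detectors replace both the overlap measure and the cue list with learned models.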
Yet AI is not completely up to the job yet. While tools like those of Fake News Challenge can call shenanigans on narrowly scoped news articles such as “US unemployment went up during the Obama years,” a more sophisticated headline such as “The Russians under Putin interfered with the US Presidential Election” is still beyond what AI and machine learning can call out as false or correct.
According to Fake News Challenge, “It won’t be possible to fact check automatically until we’ve achieved human-level artificial intelligence capable of understanding subtle and complex human interactions, and conducting investigative journalism.” However, an automated system will certainly make parts of the job much easier and more efficient for human fact-checkers.

Criticism of using AI to detect fake news

One criticism of using AI to detect fake news is that producers of fake news articles can use the AI algorithms to manipulate the analysis of their work, and that false positives could flag real news stories as fake.
People may care more about whether news matches their own worldview than about whether it is fake. Fake news trades primarily on the emotional response of its readership.
One thing is certain, fake news is threatening the degree to which people are informed about world events. There is a role for AI to play in separating fact from fiction when it comes to news stories. The question remains whether readers still care about the difference.
Sarcasm is another obstacle: there is a big gap between what people say and what they mean, and computers tend to take everything literally.
For example, "You look wonderful" can mean two very different things depending on the context and the speaker: it could mean that you do, in fact, look great, or it could mean the opposite.

Conclusion

Even though fake news is a very challenging topic and AI and machine learning cannot fully solve the problem yet, they can definitely do the heavy lifting and help to at least mitigate it.
There are cases where drawing the line between truth and opinion is difficult or impossible, whether for an AI or a human. In many cases, however, verifying facts is fairly easy and the news is blatantly fake; there, AI is definitely the right path to follow.
