New AI tool can flag fake news for media outlets

Researchers have developed a new artificial intelligence (AI) tool that can help social media networks and news organisations weed out false stories. The tool, developed by researchers at the University of Waterloo in Canada, uses deep-learning AI algorithms to determine if claims made in posts or stories are supported by other posts and stories on the same subject.

“If they are, great, it’s probably a real story. But if most of the other material isn’t supportive, it’s a strong indication you’re dealing with fake news,” said the study’s researcher Alexander Wong, Professor at the University of Waterloo.

According to the study, presented at the “Conference on Neural Information Processing Systems” in Canada, researchers were motivated to develop the tool by the proliferation of online posts and news stories that are fabricated to deceive or mislead readers, typically for political or economic gain.

Their system advances ongoing efforts to develop fully automated technology capable of detecting fake news by achieving 90 per cent accuracy in a key area of research known as stance detection.

Given a claim in one post or story, and other posts and stories on the same subject collected for comparison, the system can correctly determine whether they support the claim nine out of 10 times. That sets a new accuracy benchmark on a large dataset created for the Fake News Challenge, a 2017 scientific competition.

While scientists around the world continue to work towards a fully automated system, the Waterloo technology could be used as a screening tool by human fact-checkers at social media and news organisations, said the study.

“It augments their capabilities and flags information that doesn’t look quite right for verification. It isn’t designed to replace people, but to help them fact-check faster and more reliably,” Wong said.

AI algorithms at the heart of the system were shown tens of thousands of claims paired with stories that either supported or didn’t support them. Over time, the system learned to determine support or non-support itself when shown new claim-story pairs.
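The training setup described above — showing a system labeled claim-story pairs until it can judge support on its own — can be illustrated with a toy example. This is only a minimal sketch of the idea: the Waterloo system uses deep-learning models, whereas here a simple word-overlap feature and a learned decision threshold stand in for them, and all claims, stories and labels below are invented for illustration.

```python
def overlap(claim, story):
    """Fraction of claim words that also appear in the story."""
    c = set(claim.lower().split())
    s = set(story.lower().split())
    return len(c & s) / len(c) if c else 0.0

def train_threshold(pairs):
    """Learn the overlap threshold that best separates the labeled
    'supports' / 'does not support' training pairs."""
    scores = [(overlap(c, s), y) for c, s, y in pairs]
    best_t, best_acc = 0.0, -1.0
    for t in [i / 20 for i in range(21)]:
        acc = sum((sc >= t) == y for sc, y in scores) / len(scores)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def predict(claim, story, threshold):
    """True if the story is judged to support the claim."""
    return overlap(claim, story) >= threshold

# Invented labeled pairs: (claim, story, supports?)
train = [
    ("vaccine approved by regulator",
     "the regulator approved the vaccine today", True),
    ("city bans electric scooters",
     "officials said scooters remain legal in the city", False),
    ("team wins championship final",
     "the team wins the championship final in overtime", True),
    ("factory closes next month",
     "the factory announced record hiring this quarter", False),
]

t = train_threshold(train)
print(predict("vaccine approved by regulator",
              "the regulator approved the vaccine today", t))  # True
```

A real stance detector replaces the overlap score with learned text representations, but the workflow is the same: fit on labeled claim-story pairs, then classify new pairs as supportive or not.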

“We need to empower journalists to uncover truth and keep us informed,” said study researcher Chris Dulhanty. “This represents one effort in a larger body of work to mitigate the spread of disinformation,” Dulhanty added.

Comments (1)
  • hari.prasad

    My biggest concern is how these tools are going to handle unwritten history. If there is no movie, no best-selling book, only some good or bad story from our village that only locals know, then how can it validate that, when even mainstream media with all their resources fail to do so? The reasons are many and we can know them. How do we ensure this tool is biased towards truth and not just doing a balancing act?

    This is going to be the next Facebook or Google of the world, but it will be an authority on truth. How do we ensure it is not hijacked by leftists, the way Facebook, Twitter and even Wikipedia and Google systems have been?

    Another concern is content vs intent. People react to intent, but they listen to or read content. Words can be manipulated in many ways and the whole intent can be changed. In court one can argue the meaning of one’s text and align the intent out of fear of punishment, but normally one may enjoy the controversy.