
Multi-Modal Co-Training for Fake News Identification Using Attention-Aware Fusion

EasyChair Preprint no. 7030

14 pages
Date: November 10, 2021


The rapid dissemination of fake news, created to deliberately mislead the large user populations of online information-sharing platforms, is a major societal problem. A critical challenge in this scenario is that multimodal content shared online, e.g., text supported by photos, is frequently crafted to attract readers' attention. While `fakeness' is not strictly synonymous with `falsity', the objectives behind creating such content vary widely: it may depict additional information for clarification, but very frequently it propagates fabricated or biased information to purposefully mislead, or intentionally manipulates the image to fool the audience. Our objective in this work is therefore to evaluate the veracity of a news item by addressing a two-fold task: (1) determining whether the image or the text component of the content is fabricated, and (2) detecting inconsistencies between the image and text components, which may reveal the image to be out of context. We propose an effective attention-aware joint representation learning framework that learns comprehensive fine-grained data patterns by correlating each word in the text component with each potential object region in the image component. By designing a novel multimodal co-training mechanism that leverages class-label information within a contrastive loss-based optimization framework, the proposed method shows significant promise in identifying cross-modal inconsistencies. Consistent outperformance of other state-of-the-art methods (in both accuracy and F1-score) on two large-scale datasets that cover different fake-news characteristics (defining information veracity at various levels of detail, such as `false', `false connection', `misleading', and `manipulative' content), topics, and domains demonstrates the feasibility of our approach.
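The word-to-region correlation described above can be illustrated with a minimal cross-modal attention sketch. This is not the authors' implementation; the function name, feature shapes, and the use of scaled dot-product attention are illustrative assumptions about how each word might attend over candidate object regions.

```python
import numpy as np

def cross_modal_attention(word_feats, region_feats):
    """Attend each text word over all image object regions (illustrative sketch).

    word_feats:   (n_words, d)   word embeddings from the text component
    region_feats: (n_regions, d) object-region embeddings from the image component
    Returns a per-word attended visual context vector, shape (n_words, d).
    """
    d = word_feats.shape[-1]
    # Scaled dot-product similarity between every word and every region.
    scores = word_feats @ region_feats.T / np.sqrt(d)      # (n_words, n_regions)
    # Softmax over regions (with max-subtraction for numerical stability).
    scores -= scores.max(axis=-1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    # Each word's visual context is a convex combination of region features.
    return attn @ region_feats                              # (n_words, d)
```

In a framework like the one described, these word-conditioned visual contexts could be fused with the word features to expose fine-grained image-text inconsistencies before the contrastive, label-aware co-training stage.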

Keyphrases: attention, attention-aware fusion, co-training, fake news detection, fake news recognition, feature fusion, multi-modal classification, multimodal attention, rumor

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:

@booklet{EasyChair:7030,
  author = {Sreyasee Das Bhattacharjee and Junsong Yuan},
  title = {Multi-Modal Co-Training for Fake News Identification Using Attention-Aware Fusion},
  howpublished = {EasyChair Preprint no. 7030},
  year = {EasyChair, 2021}}