Citation

A reasoning based explainable multimodal fake news detection for low resource language using large language models and transformers

Author:
LekshmiAmmal, Hariharan RamakrishnaIyer; Madasamy, Anand Kumar
Publication:
Journal of Big Data
Year:
2025

Nowadays, individuals rely predominantly on online social media platforms, news feeds, websites, and news aggregator applications to keep up with recent news. This trend has driven a proliferation of such platforms, which have in turn been accused of spreading fake news to attract attention and engagement. Misinformation was once propagated only as text; with the advent of technology, however, it now spreads in multimodal forms such as images paired with text, videos, and audio with textual content. Current automatic fake news detection models focus on high-resource languages and offer only superficial output. Social media users need clarity and reasoning when identifying fake news, rather than a bare classification of news as fake: providing context, reasoning, and explanations helps users understand why a given news item is misleading or false. Hence, a multimodal system is needed that both identifies fake news and justifies its decisions. In this work, we developed a multimodal fake news detection system with reasoning-based explainability for Tamil, a low-resource language. The dataset for this work was retrieved from fact-checking websites and official news websites. We experimented with different combinations of models for the visual and textual modalities. Further, we integrated LLM-based image descriptions into our model alongside the text and visual features, resulting in an F1 score of 0.8736. We used a Siamese model to determine the similarity between the news text and its image descriptions. Additionally, we conducted an error analysis and used explainable artificial intelligence to explore the reasoning behind our model's predictions. We also present the textual reasoning for the model's predictions and match it against the images.
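
The pipeline described in the abstract (transformer text features, visual features, LLM-generated image descriptions, and a Siamese similarity check between the news text and its image description) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' published architecture: the class names, the 768-dimensional embeddings, the concatenation-based fusion head, and the use of cosine similarity in the Siamese branch are all assumptions made for the example.

```python
# Hypothetical sketch of the fusion design summarized in the abstract.
# Text features, visual features, and LLM-generated image-description
# features are concatenated, while a Siamese branch scores how well the
# news text agrees with the image description. All names, dimensions,
# and layer choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseSimilarity(nn.Module):
    """Projects two embeddings through one shared linear layer and
    returns their cosine similarity as an agreement score."""
    def __init__(self, in_dim: int, proj_dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(in_dim, proj_dim)  # weights shared by both inputs

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        pa, pb = self.proj(a), self.proj(b)
        return F.cosine_similarity(pa, pb, dim=-1)  # shape: (batch,)

class MultimodalFakeNewsClassifier(nn.Module):
    """Fuses text, image, and image-description embeddings plus the
    Siamese agreement score, then classifies real vs. fake."""
    def __init__(self, text_dim=768, image_dim=768, desc_dim=768, hidden=512):
        super().__init__()
        # Assumes the text and description encoders share an embedding size.
        self.similarity = SiameseSimilarity(text_dim)
        self.classifier = nn.Sequential(
            nn.Linear(text_dim + image_dim + desc_dim + 1, hidden),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(hidden, 2),  # two classes: real / fake
        )

    def forward(self, text_emb, image_emb, desc_emb):
        sim = self.similarity(text_emb, desc_emb).unsqueeze(-1)  # (batch, 1)
        fused = torch.cat([text_emb, image_emb, desc_emb, sim], dim=-1)
        return self.classifier(fused)

# Smoke test with random tensors standing in for the outputs of a
# Tamil-capable text encoder, a vision encoder, and an encoder over
# LLM-generated image descriptions.
if __name__ == "__main__":
    model = MultimodalFakeNewsClassifier()
    text, image, desc = (torch.randn(4, 768) for _ in range(3))
    logits = model(text, image, desc)
    print(logits.shape)  # torch.Size([4, 2])
```

Feeding the Siamese agreement score to the classifier as an extra feature is one plausible way to realize the similarity check the abstract describes: a low score lets the model flag items whose attached image, as described by the LLM, does not match the textual claim.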