Social Science Research Council Research AMP Just Tech
Citation

AraBLA: cross-layer enhanced AraBERT with BiLSTM and attention for Arabic fake news detection

Author:
Alrayzah, Asmaa; Alqhtani, Samar M.; Alrashidi, Bedour
Publication:
Scientific Reports
Year:
2026

The rapid growth of misinformation in Arabic digital media has created an urgent need for more accurate detection systems. Many existing approaches, whether based on traditional machine learning or transformer-based models, struggle with the linguistic complexity of Arabic and the variability of news sources. To address this challenge, we propose Arabic BERT with Layer-wise BiLSTM and attention (AraBLA), a model that extends AraBERT through layer-wise BiLSTM refinement, cross-layer multi-head attention, and learned importance pooling. This architecture leverages all twelve AraBERT layers, enabling the model to capture both deep semantic representations and sequential patterns that standard transformer classifiers, which typically use only the final layer, often overlook. AraBLA was evaluated on four datasets: AFND, ANS, AraNews, and Covid19Fakes, covering long-form news articles, stance-annotated claims, machine-generated content, and short social media posts. Across all four datasets, the model demonstrates consistent performance gains. On AFND, AraBLA achieves 0.95 accuracy and a 0.95 F1-score, outperforming the published state-of-the-art model by approximately two percentage points. On ANS, it attains 0.77 accuracy and a 0.70 F1-score, again surpassing the strongest available transformer baselines. On AraNews and Covid19Fakes, the model delivers the highest accuracy and competitive recall, improving F1-scores by roughly two points over prior CNN- and transformer-based approaches. These findings indicate that integrating hierarchical transformer representations with BiLSTM refinement and cross-layer attention provides a more effective and adaptable solution for Arabic fake news detection across diverse domains.
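The abstract does not give equations for the learned importance pooling step, but the idea, combining all twelve AraBERT layer outputs with learned per-layer weights rather than using only the final layer, can be illustrated with a minimal NumPy sketch. The function name, the fixed score vector, and the toy dimensions below are illustrative assumptions, not the paper's implementation; in the actual model the scores would be trainable parameters and the layer outputs would come from AraBERT (e.g. via `output_hidden_states=True` in Hugging Face transformers).

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def importance_pool(layer_outputs, layer_scores):
    """Combine per-layer representations with importance weights.

    layer_outputs: (num_layers, seq_len, hidden) stacked layer outputs
    layer_scores:  (num_layers,) scalar scores (learned in the real model;
                   fixed here purely for illustration)
    returns:       (seq_len, hidden) pooled representation
    """
    weights = softmax(layer_scores)  # normalize scores to a distribution
    # weighted sum over the layer axis: contracts weights' only axis
    # with layer_outputs' first axis -> (seq_len, hidden)
    return np.tensordot(weights, layer_outputs, axes=1)

# toy example: 12 "AraBERT layers", 4 tokens, hidden size 8
rng = np.random.default_rng(0)
layers = rng.standard_normal((12, 4, 8))
scores = np.zeros(12)  # equal scores -> pooling reduces to the layer mean
pooled = importance_pool(layers, scores)
```

With equal scores the pooled output is simply the mean across layers; during training, the learned scores let the model emphasize whichever layers (e.g. syntax-heavy middle layers vs. semantic top layers) are most useful for the fake-news classification objective.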