This research investigates the effectiveness and reliability of Low-Rank Adaptation (LoRA) for detecting health misinformation. While parameter-efficient fine-tuning (PEFT) methods substantially reduce computational costs, their impact on model factuality remains insufficiently characterized in safety-critical domains. This study implements a targeted LoRA configuration within a bidirectional encoder representation model, adapting all attention layers. The results indicate that this approach achieves an accuracy of 85.1% and a Macro F1 score of 85.1% while updating only 0.1% of the model's parameters. However, our evaluation also identifies a performance-factuality paradox: while LoRA maintains high detection precision, it exhibits increased susceptibility to hallucinations, particularly as input complexity rises. We observe a measurable increase in predictive entropy when processing sequences exceeding 400 tokens, which we characterize as a semantic bottleneck inherent in low-rank constraints. These findings suggest that while LoRA offers a viable path toward efficient misinformation detection, its deployment in healthcare requires specific mitigation strategies to preserve factual integrity. This study provides empirical evidence to guide the development of more reliable and efficient language models for public health communication.
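To make the parameter-efficiency claim concrete, the following is a minimal sketch of a LoRA update applied to a single attention weight matrix. It is not the study's implementation: the hidden size, rank, and scaling factor are illustrative assumptions, since the abstract does not report them.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Apply a frozen weight W plus the low-rank update (alpha/r) * B @ A."""
    r = A.shape[0]
    return x @ (W + (alpha / r) * (B @ A)).T

d = 768  # hidden size of a BERT-base-style encoder (illustrative)
r = 8    # LoRA rank (assumed; not reported in the abstract)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-initialized

x = rng.standard_normal((4, d))          # a batch of 4 token embeddings
y = lora_forward(x, W, A, B)             # equals x @ W.T at initialization

frozen = W.size
trainable = A.size + B.size              # 2 * r * d parameters per adapted matrix
print(f"trainable fraction per layer: {trainable / (frozen + trainable):.4%}")
```

Per adapted attention matrix, the trainable fraction here is about 2%; over a full encoder, whose embeddings and feed-forward layers stay frozen, the overall fraction falls much further, which is consistent in spirit with the ~0.1% figure reported above.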
