Misinformation is one of the most critical issues of recent years, harming democracy, the economy, and society. Despite many attempts, traditional techniques are not powerful enough to address the new challenges arising from the 4Vs (volume, variety, velocity, veracity) of Big Data. First, large volumes of data on social platforms are generated at unprecedented and ever-increasing scales. Existing misinformation detection techniques were designed for datasets of conventional scale and struggle to meet the resulting scalability and storage requirements. Second, social data and Web data involve a great variety of formats in different modalities: texts, images, videos, and arbitrary combinations of them. Third, data are generated in real time and continually arrive in the form of streams, allowing misinformation and fake news to propagate beyond control by the time they are detected. Fourth, recent advances in AI-fabricated attacks such as text synthesis, fake image generation, and DeepFake videos add a further layer of bias, noise, and abnormality to user behavior and content data. These challenges call for timely and robust techniques for monitoring, detecting, and mitigating misinformation by advancing topics in data management, data integration, data provenance, data quality, and stream processing.
Misinformation management techniques also need to work together with people, whose domain knowledge is on par with the most complex AI techniques, and who must validate the automatic output for fairness and transparency. Recent human-in-the-loop platforms for such validation, including Amazon Mechanical Turk and Snopes, are growing in both scale and domain expertise. At the same time, data management in these systems has become a new challenge, as human labour is expensive and slow-paced. New data models and algorithms are needed to use human labour wisely and to take into account the cognitive and physiological characteristics of the people involved.
This special issue seeks high-quality and original contributions that advance the concepts, methods, and theories of misinformation detection, as well as address the mechanisms, strategies, and techniques for misinformation intervention. All contributions should clearly address the knowledge gaps indicated in the literature and will be peer-reviewed by a panel of experts in the relevant fields. We particularly welcome benchmarks, performance evaluations, and testbeds for reproducibility validation.