One of today's growing problems is misinformation: the proliferation of fake news and misleading content across social media platforms. While artificial intelligence (AI) has contributed to its spread, there is mounting evidence that AI can also be used to curb the problem.
However, misinformation extends well beyond the daily news article; it has far-reaching, and often alarming, implications in critical fields such as cybersecurity, public safety, medicine, and even science itself. Collaborative papers have already been published on the subject, including one in the April 2021 issue of PNAS, examining how misinformation arises from common human biases and from prevailing practices in the critique and release of scientific papers, even in respected, peer-reviewed journals.
Now, a new study involving researchers from the University of Maryland, Baltimore, examines an emerging source of misinformation within the scientific community. The researchers report that AI systems can generate misinformation convincing enough to fool even experts in fields such as medicine and defense.