Recognizing AI’s misinformation

What does cybersecurity have to do with human recognition of false data? A lot, says MechE’s Conrad Tucker, who teamed up with Challenger Center and RAND to study the link between the two.

There’s no doubt about it: we are living in the information age. Anyone can search the internet for whatever information their heart desires. Unfortunately, this also creates a perfect opportunity for misinformation to spread widely. But how susceptible is the public to this misinformation, especially in the age of COVID-19, when people have become exceedingly reliant on online content for learning and professional work alike? And what challenges exist in protecting both people and computer networks from being misled by AI-generated misinformation?

Like people, computers must be trained to spot threats. Their threats, however, come in the form of malware, such as computer viruses, which can be considered a form of misinformation aimed at machines.

Conrad Tucker, a professor of mechanical engineering, has teamed up with Challenger Center and the RAND Corporation to investigate the link between the AI-generated threats posed to humans and those posed to computers in the digital landscape. Challenger Center will explore how K-12 students identify misinformation, CMU will focus on college students, and RAND on teachers and other adults. The project will also introduce students to AI.

[…]

Source: Recognizing AI’s misinformation – College of Engineering at Carnegie Mellon University