During the 2015 Canadian federal election campaign, several candidates withdrew after compromising videos and social media posts from their pasts surfaced online. While the content varied, the videos and posts had at least one thing in common: the candidates did not deny that the content was real.
Now, however, because of the rapid development of what is known as “deep fake technology,” Canadians may no longer be able to trust the videos they see or the audio clips they hear.
Deep fake technology, according to one definition, “leverages machine-learning algorithms to insert faces and voices into video and audio recordings of actual people and enables the creation of realistic impersonations.” Put more simply, the technology “makes it possible to create audio and video of real people saying and doing things they never said or did.”
Once solely the domain of the artificial intelligence research community, deep fake technology first attracted public attention in December 2017, when an anonymous user on the online forum Reddit, who went by the name “Deepfakes,” began posting synthetic pornographic videos in which the faces of celebrities were convincingly superimposed onto those of the original actors and actresses.4 Around the same time, the same user released a software kit that allowed others to make their own synthetic videos.5 The Defense Advanced Research Projects Agency (DARPA) – an agency of the United States Department of Defense – is of the view that even relatively unskilled users can now “manipulate and distort the message of the visual media.”