The ViDSS Blog


Beyond the known – my PhD journey with deepfakes

As my PhD journey comes to an end, I am often reminded of a graphic a professor once sketched to illustrate what it means to add to human knowledge. Picture all human knowledge as a vast circle; at the beginning of your academic journey, you are merely a speck in the centre. As you progress through your bachelor’s and master’s degrees, that speck grows, inching closer to the outer rim, but you are still far from the boundary of human understanding. It is during the PhD that you embark on a gradual expansion, exploring one specific direction until you reach the very edge. There, like a minuscule pin, you push against the boundary of what we know. It has taken me three years to be able to say that I made a tiny dent in this very boundary.

Over the past three years, my focus has been on deepfakes, which are hyper-realistic audio-visual manipulations created by artificial intelligence. In simple terms, deepfakes can make it seem like someone is saying or doing something they never actually did. They are regarded as a game-changer in the world of digital deception, thrusting us into uncharted territory. In my field of political communication research, deepfakes fall under the category of disinformation because they are created with the intent to deceive. However, when I started my PhD journey, the literature was vague on how visuals fit into the disinformation landscape. So my first task, together with Sophie Lecheler, was to develop a theoretical framework for understanding visual disinformation—a small step towards expanding our knowledge.

Meanwhile, there has been a lot of buzz and concern about deepfakes, especially in political circles. People have painted all sorts of scary scenarios about deepfakes disrupting elections, but in reality, there have not been many confirmed cases. Therefore, Sophie Lecheler and I conducted an in-depth interview study with fact-checkers, who are on the front lines of battling mis- and disinformation. They surprised us by saying that deepfakes were not really on their radar yet. Instead, they were more worried about simpler, but still deceptive, tricks like taking videos out of context or circulating low-quality fakes.

By then, I had become curious about the effects of deepfakes, so I conducted two survey experiments. In the first, co-authored with Jana Laura Egelhofer and Sophie Lecheler, we compared deepfakes to the other, simpler forms of visual disinformation that the fact-checkers had highlighted. Surprisingly, participants did not buy into the credibility of the deepfake portrayal, but they still formed misperceptions about the politician and viewed her more negatively. This finding reshaped our understanding of deepfakes’ impact: people may not find them believable, but they can still believe in what they portray.

Digging deeper, Hannah Greber, Alina Nikolaou, and I aimed to unravel the abstract notion of whether ‘seeing is no longer believing’ in the realm of deepfakes. We designed an experiment encompassing various modalities: audio, video, and 360° video. The outcome was consistent across all formats: after exposure to the deepfake, participants attributed less credibility to audio-visual media overall. Put simply, they doubted the truthfulness of these media forms.

Writing it all down like this makes me think that I have managed to make a relevant contribution to our understanding of deepfakes. However, it also leaves me with many unanswered questions. I refrain from giving a definitive answer to whether deepfakes are inherently dangerous. But I will say this: deepfakes represent just one of many ways to manipulate public perception through visuals, and we should consider all of them. Deepfakes are a constantly evolving phenomenon. While fact-checkers had not come across many at the time of our conversations, more examples have surfaced since, a trend that requires close monitoring. Additionally, deepfakes might lack credibility, especially when they are not created by expert programmers; nevertheless, they can still have strong cognitive effects and foster misperceptions. Lastly, to a certain degree, seeing is no longer believing amid deepfakes, but this assertion needs much more theoretical testing. For now, I will leave you with a painting by one of my favourite artists – David Shrigley. (12.02.2024, Teresa Weikmann)

ViDSS student and sowi:doc Fellow Teresa Weikmann is currently finishing her doctoral thesis about deepfakes at the Department of Communication. (© Teresa Weikmann)