DeFaking Deepfakes: Understanding Journalists' Needs for Deepfake Detection

Abstract

Although concern over deliberately inaccurate news is not new in media, the emergence of deepfakes (manipulated audio and video generated using artificial intelligence) changes the landscape of the problem. As these manipulations become more convincing, they can be used to place public figures into manufactured scenarios, effectively making it appear that anybody could say anything. Even if the public does not believe such videos are real, their existence makes video evidence appear less reliable as a source of validation, to the point that people no longer trust anything they see. This increases the pressure on trusted agents in the media to help validate video and audio for the general public. To support this, we propose to develop a robust and intuitive system that helps journalists detect deepfakes. This paper presents a study of journalists' perceptions, current procedures, and expectations regarding such a tool. We then combine technical knowledge of media forensics with the findings of the study to design a deepfake video detection system that is usable by, and useful for, journalists.
