ON MARCH 2, the Ukrainian government’s Center for Strategic Communication warned that its enemies might be preparing a “deepfake” video that would appear to show President Volodymyr Zelensky announcing his surrender to Russia’s invasion. On Wednesday, that warning appeared prescient.
A fake video emerged on Facebook and YouTube in which a strangely motionless version of Zelensky asked Ukrainian troops to lay down their weapons, speaking in a voice noticeably different from his own. The clip was also posted to Telegram and the Russian social network VKontakte, according to the US think tank the Atlantic Council. The TV channel Ukraine 24 said hackers defaced its website with a still from the video and inserted a summary of the fake announcement into a broadcast’s scrolling chyron.
Minutes after the TV station posted about the hack, Zelensky himself posted a Facebook video denying that he had asked Ukrainians to lay down their arms and calling the fake a childish provocation. Nathaniel Gleicher, head of security policy at Facebook’s owner Meta, tweeted that the company had removed the original deepfake clip for violating its policy against misleading manipulated media. A statement provided by Twitter spokesperson Trenton Kennedy said the company was tracking the video and removing it in cases where it breached rules banning deceptive synthetic media. YouTube spokesperson Ivy Choi said it also had removed uploads of the video.
That short-lived saga could be the first weaponized use of deepfakes during an armed conflict, although it is unclear who created and distributed the video and with what motive. The way the fakery unraveled so quickly shows how malicious deepfakes can be defeated—at least when conditions are right.
The real Zelensky benefited Wednesday from being part of a government that had prepared for deepfake attacks. His quick video rebuttal, along with the nimble reactions of Ukraine 24 and the social platforms, helped limit how long the clip could spread uncontested.
Those are textbook strategies for defending against a threat as new as political deepfakes. Preparation and rapid response were at the heart of a playbook for defeating deepfakes that the Carnegie Endowment for International Peace released for political campaigns ahead of the 2020 US presidential election.
Zelensky also benefited from his position as one of the highest-profile people in the world and the deepfake’s poor quality. The deepfake presidential double looked unnatural, with a face that didn’t match its body, and its voice sounded different from that of its target.
Read the full article at: www.wired.com