Hany Farid (UC Berkeley) and Matyáš Boháček (Prague) are experts in combating so-called deepfakes: often deceptively realistic, computer-generated videos in which faces are swapped and words are put into people's mouths that they never said. A well-known example is a deepfake of Volodymyr Zelensky in which the Ukrainian president appears to announce his country's surrender, as we reported.
Zelensky's protection against deepfakes
It was precisely this incident that prompted Farid and Boháček to work on a tool to protect the Ukrainian president from further infowar attacks of this kind. They developed a behavioral model of face and gesture based on the characteristic features of Zelensky's speaking style. To do this, they fed an algorithm a good eight hours of video footage of the Ukrainian president, recorded on four different occasions. The two published their first results as a scientific preprint in June 2022.

While the democratization of access to video manipulation and synthesis techniques has led to interesting and entertaining applications, it has also raised complex ethical and legal questions. Particularly in the context of war, deepfakes pose a significant threat to our ability to understand and respond to rapidly evolving events. While our approach to protecting a single individual – Ukrainian President Zelensky – does not address the broader problem of deepfakes, it does provide a measure of digital protection for arguably the most important Ukrainian voice in this time of war.
Hany Farid and Matyáš Boháček in their June 2022 preprint
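The researchers have not published their code, but the pipeline described above has a fairly standard first step: turning hours of raw video into a per-frame facial signal from which a behavioral baseline can be learned. The following Python sketch illustrates that step with the open-source MediaPipe FaceMesh tracker; the function name is ours, and the actual system draws on richer features (facial action units, head pose, voice), so treat this as an illustration, not the authors' method.

```python
# Minimal sketch (not the authors' published code): track 468 face landmarks
# per frame as the raw material for a per-person behavioral model.
import cv2
import mediapipe as mp
import numpy as np

def extract_landmark_series(video_path: str) -> np.ndarray:
    """Return normalized face landmarks, shape (n_frames, 468, 2)."""
    series = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=False,
                                         max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV delivers BGR.
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_face_landmarks:
                pts = result.multi_face_landmarks[0].landmark
                series.append([(p.x, p.y) for p in pts])
    cap.release()
    return np.asarray(series)

# Eight hours of footage at 30 fps yields roughly 860,000 frames of
# behavioral data from which a personal baseline can be learned.
```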
Deepfake of Vitali Klitschko in conversation with the mayor of Vienna
Another deepfake attack related to the war in Ukraine was widely reported in the media. Prominent supporters of the Ukrainian cause, especially regional politicians and mayors, were contacted by a supposed Vitali Klitschko. While some saw through the fake very quickly and broke off the conversation, the less skeptical mayor of Vienna, Michael Ludwig, stayed on until the end of the video conference.
Apart from wasted time and mockery from the political opposition, the conversation had no consequences for Austria's Ukraine policy. In May, however, the Austrian federal government presented an action plan against deepfakes. “This will allow us to combat deepfakes even more precisely and effectively in the future. Our goal is clear: pull the plug on disinformation and hatred on the internet,” explained Interior Minister Gerhard Karner.
Further development: protection for prominent people and politicians
Not surprisingly, the two scientists have further refined their tool. The system now draws on biometric data as well as facial expressions and gestures to recognize the slight deviations that give away counterfeits, no matter how good they are. In their interim report, Farid and Boháček report an accuracy of 99.99 percent in debunking deepfakes. To achieve this, the adaptive algorithms need, as in Zelensky's case, a few hours of high-quality video recordings, ideally from different occasions.
We describe an identity-based approach to protecting heads of state from deepfake impostors. Using several hours of authentic video footage, this approach captures distinct facial, gestural, and vocal characteristics that we show can distinguish a head of state from an impersonator or a deepfake impostor.
Farid and Boháček in “Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms”
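The “identity-based” framing in the quote suggests an anomaly-detection setup: learn what the genuine person looks and sounds like, then flag anything that deviates. As a hypothetical illustration of that idea (not the authors' published classifier), here is a sketch using scikit-learn's OneClassSVM trained only on authentic clips; the feature size and the random stand-in data are invented for the example.

```python
# Hypothetical illustration: an identity model trained only on the genuine
# person's footage flags out-of-distribution clips as potential fakes.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# One 64-dimensional feature vector per authentic 10-second clip
# (stand-in for real facial/gestural/vocal features).
X_real = rng.normal(size=(500, 64))

model = make_pipeline(StandardScaler(), OneClassSVM(nu=0.01, kernel="rbf"))
model.fit(X_real)

# Clips to be checked: anything far from the learned baseline is suspect.
X_suspect = rng.normal(loc=1.0, size=(10, 64))
print(model.predict(X_suspect))  # +1 = consistent with identity, -1 = possible fake
```

A design consequence of this setup is that the detector never needs examples of fakes: it only needs to know the protected person well, which matches the article's point that a few hours of high-quality footage suffice.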
Tools to combat this are available to authorities and reputable media
Hany Farid and Matyáš Boháček are aware that any publication of research results on deepfake technology also helps the makers of such videos. They speak of a “cat-and-mouse game between creator and detector”. At the moment, however, the advantage is on their side, because their analyses of facial, gestural, and vocal patterns work with observation windows of ten seconds, whereas the synthesis programs create their videos frame by frame or in short sequences.
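The ten-second window is the key asymmetry here: behavioral mannerisms only emerge over time, while synthesis tools optimize each frame in isolation. The following sketch shows how per-frame landmarks (as from the earlier example) might be pooled into such windows; the specific motion statistics are our own illustrative assumption, not the researchers' feature set.

```python
# Sketch: pool per-frame landmarks into 10-second observation windows.
import numpy as np

FPS = 30
WINDOW = FPS * 10  # 300 frames per 10-second observation window

def windowed_features(landmarks: np.ndarray) -> np.ndarray:
    """Collapse a (n_frames, n_points, 2) landmark series into one feature
    vector per non-overlapping 10-second window (mean and std of motion)."""
    # Frame-to-frame landmark motion captures gestural dynamics that
    # frame-by-frame synthesis tends not to reproduce consistently.
    motion = np.linalg.norm(np.diff(landmarks, axis=0), axis=2)
    n_windows = motion.shape[0] // WINDOW
    feats = []
    for i in range(n_windows):
        chunk = motion[i * WINDOW:(i + 1) * WINDOW]
        feats.append(np.concatenate([chunk.mean(axis=0), chunk.std(axis=0)]))
    return np.asarray(feats)  # (n_windows, 2 * n_points)
```

Each window vector can then be scored by an identity model like the one sketched above, so a fake has to stay behaviorally consistent for ten full seconds to pass.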
Despite this current advantage, the researchers do not want to release their results and methods publicly, in order to make it harder for attackers to adapt. However, they do want to make their tool available to reputable news organizations and government agencies working to counter disinformation campaigns fueled by deepfake videos.
Sources: PNAS, arXiv, Federal Ministry of the Interior, glomex, scinexx.de

