When manipulated images are used to spread fake news, the consequences can be enormous.
Creating fake videos and images with AI is becoming increasingly easy
Deepfakes are realistic-looking media content created with artificial intelligence (AI): they can be images, videos or audio recordings.
While just a few years ago an AI needed many image files of a person to create a convincing deepfake, today a single photo from a publicly accessible social media account is enough to produce deceptively realistic fakes. Thanks to free face apps and freely available browser applications, creating and then distributing such fakes is becoming ever easier.
Face apps use a technique called face swapping, in which your own face or someone else's is inserted into video clips and movie scenes. Such clips can be created and distributed in astonishingly good quality within seconds. Browser-based AI applications, so-called image synthesizers, generate (photorealistic) images from text descriptions, so-called prompts, and offer a wide range of image styles. They can, for example, produce images of prominent people, well-known buildings or landscapes. These technical possibilities take image manipulation to a new level.
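To illustrate how low the technical barrier has become, here is a minimal sketch, assuming the open-source diffusers library and a publicly available Stable Diffusion checkpoint (the model identifier is illustrative and may change over time). It turns a single text prompt into a photorealistic image in a few lines of code.

```python
# Minimal sketch, assuming the open-source "diffusers" library
# (pip install diffusers transformers torch); the model identifier
# below is illustrative and may change over time.
from diffusers import StableDiffusionPipeline

# Load a publicly available text-to-image model.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Generate an image from a plain-language prompt and save it.
result = pipe("a photorealistic photo of a well-known landmark at sunset")
result.images[0].save("generated.png")
```

Many browser-based image synthesizers are, in essence, user-friendly front ends on top of models like this, which is precisely why such images can now be produced and shared so quickly.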
Recently, several journalists caused confusion on social media: AI-generated images of the Pope as an influencer and of a supposed arrest of former US President Donald Trump circulated widely. Many users apparently did not recognize them as fakes, and the images went "viral" even though they had been marked as such.
What are deepfakes used for?
Deepfakes can be used for various purposes. The possible uses range from satire and entertainment to deliberate disinformation and propaganda to the targeted discrediting of individual people. As the examples above show, deepfakes can be particularly dangerous in connection with fake news, as they can make false reports appear even more realistic.
Deepfakes are also used for cyberbullying, for example by inserting the faces of those affected into pornographic videos and distributing them via messenger services.
Fraud with deepfakes is also possible
It is also conceivable that AI could be used for fraud. Romance scammers could, for example, use this technique on dating apps or online dating sites to persuade other people to share intimate material, which is then used to blackmail them. The perpetrators pose as attractive men or women and use deepfake video chats to gain the trust of potential victims. This approach is known as sextortion.
For society, this means that it is becoming ever easier for fraudsters to manipulate images, voices or even texts and adapt them to their own ends, making it harder to distinguish the genuine from the fake. Prevention work will have to take this into account and address it more explicitly in the future. But AI is not only used by criminals; police and law enforcement authorities are also increasingly relying on AI-based analysis tools in their investigations.
Can deepfakes be detected?
There are now programs that also use artificial intelligence to try to check the authenticity of videos, images and audio content and identify fakes.
However, distinguishing between fakes and real content is sometimes very difficult. There are currently still areas where AI image generators have difficulties, for example when depicting ears and hands or objects that are held in the hand. Light and shadow as well as perspectives or inconsistent movements can also be indicators of fakes.
- Watch out for artifacts: hands often have too few or too many fingers, arms are missing or cut off, ears are oddly shaped, and faces can have a "blank", vacant look.
- In videos, inconsistent or overly static movement can be an indication (e.g. distorted faces or facial expressions, or blinking that is too rhythmic).
- Check the perspective: Do light and shadows fit together? Are there distortions?
- Check the resolution and quality of the image: Does the face, for example, have the same resolution as the body or the background? Is the content of the image or video plausible?
- Plausibility: Check other sources. Can the content be independently verified?
Reputable news sites, fact-checking portals and the reverse image search functions of search engines can be helpful for such checks.
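For technically minded readers, a first, very rough plausibility check can also be automated. The following is a minimal sketch, assuming the Python imaging library Pillow and an illustrative file name; it prints an image's resolution and any camera metadata. Missing or stripped metadata is at most a weak hint and never proof of a fake, since platforms routinely remove metadata on upload.

```python
# Minimal sketch, assuming the Pillow library (pip install Pillow);
# the file name is illustrative. The check only inspects resolution
# and EXIF metadata; it cannot prove or disprove authenticity.
from PIL import Image
from PIL.ExifTags import TAGS

def quick_check(path: str) -> None:
    img = Image.open(path)
    print(f"Format: {img.format}, size: {img.size[0]}x{img.size[1]} px")

    exif = img.getexif()
    if not exif:
        # Many AI-generated images carry no camera metadata at all,
        # but legitimate images can also lose it when uploaded.
        print("No EXIF metadata found.")
        return

    # Print a few common camera-related tags if present.
    for tag_id, value in exif.items():
        tag = TAGS.get(tag_id, tag_id)
        if tag in ("Make", "Model", "DateTime", "Software"):
            print(f"{tag}: {value}")

quick_check("suspect_image.jpg")  # illustrative file name
```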
If you discover your likeness in a fake
From a legal perspective, deepfakes created and distributed without the consent of those affected can violate their personality rights or constitute an insult.
If you discover your likeness in a manipulated image or video that was created and distributed without your consent, you can report the content to the platform on which it was posted (typically via a "Report" button or feature).
You can also file a police report or seek legal advice. To do this, note the URL where the content was published as well as the date and time of publication.
Please note that, depending on the content of the fake, possessing such material may itself be a criminal offense. Therefore, do not take screenshots. If you are unsure how best to preserve evidence, contact your local police department.
Further information
Further information can be found on the following pages:
Source:
State and federal police crime prevention

