AI deepfakes are shaking up the Austrian political landscape and causing unrest ahead of the National Council elections in autumn 2024. AI-generated deepfakes and mass fake advertising on platforms such as Facebook have become commonplace. September 29, 2024 marks the next crucial election date, when citizens will cast their votes. But can this process still be considered safe?

Image: AI-generated / ChatGPT (MIMIKAMA)

The looming threat of AI deepfakes and AI-generated fakes

The use of deepfake technology has increased dramatically in recent years. This artificially generated content manipulates the appearance and voice of people in videos to make them appear authentic. In an election year like 2024, the proliferation of such technologies is particularly dangerous. Not only can they undermine voter confidence in election integrity, but they can also destabilize the political climate.

The power of online fakes

In the super election year of 2024, social media is full of AI-generated images, AI deepfakes, videos and mass fake advertising that specifically targets politicians. While some of this content is obviously recognizable as satire or a joke, other items are so well done that they could easily pass for real. These “softfakes” can make subtle changes to politicians’ facial expressions or statements in order to influence public opinion.


Note: This is just a small sample of the fake ads currently making the rounds on Facebook. We at Mimikama were able to screenshot around 160 advertisements! It is important to mention at this point that the users whose accounts posted these advertisements on Facebook had nothing to do with it; they themselves fell victim to a phishing attack.

Examples of current fake advertisements on Facebook (screenshots)

Attempts at manipulation are becoming increasingly insidious

It is difficult to determine the exact number of people who fall for such deepfakes. Nevertheless, it is clear that these attempts are becoming increasingly sophisticated and difficult to detect. The growing sophistication of AI technology is making it increasingly difficult for voters to distinguish between true reporting and manipulation. This poses a serious threat to the democratic process as citizens may fall for false information, thereby influencing their voting decisions.

How can we protect ourselves?

Mimikama expressly warns of these dangers and calls on all Internet users to critically question what they see online. It is essential to verify the source of information and, when in doubt, to look for reliable confirmation. In addition, platforms such as Facebook should be held to greater account in order to prevent the spread of such fakes.

1. Why are AI deepfakes a threat to democracy?

AI deepfakes are a threat to democracy because they undermine voters' trust in the political process. This technology can be used to portray politicians in a way that contradicts their real views or actions. At a time when social media is a primary source of news and information, manipulated videos and images can spread quickly and influence public opinion. When voters make decisions based on misinformation, the integrity of the democratic process is compromised. It is therefore crucial that both citizens and platforms actively combat the spread of such content and promote critical thinking.

2. How do you recognize a deepfake?

Deepfakes can be difficult to detect as they become increasingly realistic. However, there are some signs to look out for. These include inconsistencies in facial expressions or lip movements that do not match the spoken text. Unnatural blinking or strange shadows and lighting in a video can also be clues. The edges of faces often appear blurred, or skin tones appear unnatural. It is important to remain skeptical and rely on trustworthy sources, especially when it comes to political content.

3. What role does social media play in the spread of deepfakes and fake advertising?

Social media plays a central role in the spread of deepfakes and mass fake advertising. Platforms such as Facebook and Twitter enable content to be distributed quickly and widely, often without sufficient verification of authenticity. This can result in fake videos and images going viral and reaching large numbers of people before they are exposed as fakes. It is important that social media take responsibility and develop mechanisms to detect and curb the spread of deepfakes and fake advertising. Users should also be vigilant and develop critical media literacy to avoid falling for manipulated content.

4. What are “softfakes” and how do they differ from deepfakes?

“Softfakes” are a more subtle form of AI manipulation compared to deepfakes. While deepfakes often change entire faces or voices, softfakes involve minor adjustments that can still influence perception. This can include removing context, changing facial expressions, or editing background noise to convey a specific message. Softfakes are harder to detect because they often contain less noticeable changes that can still influence viewers' opinions. They are another example of how AI technology can be used to carry out subtle but powerful manipulations.

5. How can we protect ourselves against the spread of false information?

To protect against the spread of misinformation, citizens and platforms alike should be vigilant. It's crucial to check sources carefully and remain skeptical of sensational or unexpected information. Media literacy and critical thinking are essential in order to recognize and question manipulated content. Platforms should develop technical solutions to quickly identify and remove deepfakes and fake advertising. It is also important that users report incidents of disinformation and rely on trusted sources of information to make informed decisions. Promoting informed and critical discourse is key to minimizing the impact of misinformation.

Video: Deepfakes – The Threat to Truth – Andre Wolf from Mimikama (Skepkon 2024)


In the digital age, it is more important than ever to protect yourself from AI manipulation and fake news. Artificial intelligence (AI) is increasingly being used to create deceptively realistic images, videos and information that can mislead the public. This guide will help you recognize such manipulations and protect yourself effectively.

Understanding the different types of AI manipulation


Deepfakes
Deepfakes are videos or audio files created by AI technology to make people say or do something they never said or did. These can be very convincing because they mimic the appearance and voice of people.

Softfakes
Softfakes are subtler changes to existing content that are less obvious. This includes slight adjustments to facial expressions, gestures or context to change the perception of an event or person.

Mass Fake Advertising
This involves the spread of false or misleading information via social media and other platforms to push a particular agenda or damage the reputation of an individual or organization.

Steps to Identify Misinformation


Check the source
Check whether the information comes from a trustworthy source. Reputable news portals and official organizations are often more credible than unknown websites. Also check the web address and watch out for suspicious URLs that closely resemble those of real websites but differ slightly.
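
To illustrate one way such lookalike addresses can be flagged, the short Python sketch below compares a domain against a small list of legitimate domains using string similarity. The domain list, threshold and example URLs are purely hypothetical and only meant as a rough demonstration of the idea, not a complete phishing check.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical list of legitimate domains used for comparison
KNOWN_DOMAINS = ["orf.at", "derstandard.at", "mimikama.org", "facebook.com"]

def looks_like_impersonation(url: str, threshold: float = 0.8) -> bool:
    """Flag a domain that closely resembles, but does not match, a known domain."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    for known in KNOWN_DOMAINS:
        if domain == known:
            return False  # exact match with a known domain: nothing suspicious
        if SequenceMatcher(None, domain, known).ratio() >= threshold:
            return True   # very similar but not identical: possible lookalike
    return False

print(looks_like_impersonation("https://www.mim1kama.org/article"))  # True
print(looks_like_impersonation("https://www.mimikama.org/article"))  # False
```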

Pay attention to the date
Outdated or out-of-context messages can be misleading. Check the publication date to ensure the information is current.

Question Sensational Headlines
Sensational headlines are often designed to evoke emotions and encourage clicks. Read the entire article before forming an opinion.

Review image and video editing
Look for evidence of editing, such as unnatural movements or strange shadows. In videos, discrepancies between sound and image are often a sign of manipulation.
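
One simple forensic heuristic for still images that some analysis tools build on is error level analysis (ELA): re-saving a JPEG at a known quality and comparing it with the original can make re-edited regions stand out as brighter areas. The Python sketch below (using the Pillow library, with a purely hypothetical file name) is only a rough illustration of that idea and by no means a reliable manipulation detector.

```python
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save the image as JPEG and return the amplified difference image."""
    original = Image.open(path).convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    # Scale the differences so that potentially edited regions become visible
    max_diff = max(channel_max for _, channel_max in diff.getextrema()) or 1
    return diff.point(lambda value: value * (255.0 / max_diff))

# Hypothetical input file; bright areas in the result may hint at re-edited regions
error_level_analysis("suspect_image.jpg").save("ela_result.png")
```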

Analyze context and content
Check whether the content is presented in an appropriate context. Sometimes quotes or statements are taken out of context to promote a particular narrative.

Clues for recognizing AI-generated images


Errors in the background
AI systems are usually very good at generating human faces and bodies. However, they are often not that good at making the background realistic. Therefore, background errors can be a sign of an AI-generated image.

Unnatural shadow
Another sign of an AI-generated image can be an unnatural shadow. While AI systems are capable of generating shadows, they often cannot account for the subtleties of light and shadow that occur in the real world.

Repetitive Patterns
AI systems are often trained with large amounts of data. Therefore, they may generate repeating patterns in images. So if you see an image that has an unusually high number of repeating patterns, it could be an AI-generated image.

Incorrect proportions
AI systems can have difficulty correctly calculating the proportions of objects in images. So if you see an image where the proportions of objects look strange, it could be an AI-generated image.

No attribution
If you see an image that has no attribution, it could be an AI-generated image. Because it is difficult to trace the origin of AI-generated images, it is often difficult to provide attribution.

Practical tips for protecting yourself from AI deepfakes


Use deepfake detection software
There are tools and software that can help identify deepfakes by pointing out signs of manipulation. Examples include Deepware Scanner and Sensity. These programs analyze videos for anomalies and changes that may indicate a fake.

Watch for subtle clues
Unnatural eye movements, uneven skin tones, or out-of-sync lip movements can indicate deepfakes. Also note whether the sound is even and the background noise is consistent.

Check the content across multiple sources
If you come across a suspicious video, look for coverage of it in other, trusted media outlets. Deepfakes are often not corroborated by reporting in reputable news sources.

Familiarize yourself with the technology
Understand the basics of how deepfakes are created and what software is used to create them. A basic understanding can help you better recognize the signs.

Be particularly wary of controversial content
Content that is highly emotional or controversial is often a target for deepfakes. View such videos particularly critically and question their authenticity.

Practical tips to protect yourself from fake news


Use Google Reverse Image Search
This feature allows you to check the origin of an image and see if it has already appeared elsewhere on the Internet. Simply upload the image to Google to see where else it is being used.
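
For readers who want to check many images, a programmatic route also exists: the sketch below uses the Google Cloud Vision API’s web detection feature, which returns pages where matching copies of an image appear. It assumes the google-cloud-vision client library and valid Google Cloud credentials, and the file name is hypothetical; for most users the browser-based reverse image search described above is the simpler option.

```python
# Requires: pip install google-cloud-vision (plus valid Google Cloud credentials)
from google.cloud import vision

def pages_with_matching_images(image_path: str) -> list[str]:
    """Return URLs of pages on which (near-)identical copies of the image appear."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.web_detection(image=image)
    return [page.url for page in response.web_detection.pages_with_matching_images]

# Hypothetical file name; prints pages that already show this image
for url in pages_with_matching_images("suspicious_ad.jpg"):
    print(url)
```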

Watch out for unnatural details
In images and videos, unnatural details such as strange movements or inconsistent shadows can indicate tampering. Review content carefully and compare it with other trusted sources.

Develop critical thinking
Ask yourself if the information you see is plausible. Check whether it is consistent with other known facts and whether there is an apparent purpose why someone might manipulate this information.

Discuss with friends and family
Talk about suspicious content with others. Sometimes a different perspective can help detect manipulation. Share your insights and promote critical discourse.

Promoting media literacy


Education and awareness
Invest time in educating yourself and others about the risks of AI manipulation and the importance of media literacy. Share trustworthy information and help others spot fakes. NOTE: At mimikama.education we offer a variety of courses and workshops to improve media literacy. These courses teach you how to critically question news and recognize fakes.

Encourage Critical Thinking
Be skeptical of information that provokes strong emotional reactions. Consider whether the information is plausible and consistent with known facts.

Active Participation in Discourse
Discuss the issues that concern you and question the information presented to you. Share your insights with others and encourage healthy dialogue.


Mimikama note: Disinformation also works without AI

While AI deepfakes and manipulative technologies are the focus of public discussion, we should not forget that disinformation can also be spread in traditional ways - without the use of AI. Fake news, misleading headlines and deliberate manipulation of facts continue to be common ways to influence public opinion.

Traditional methods of disinformation:

  • Clickbait headlines: Sensational headlines, which often do not reflect the actual content of an article, are used to attract attention and generate clicks. Readers should always read the entire article and not rely on the headline alone.
  • Half-truths: Partially accurate information that is intentionally taken out of context or presented incompletely to promote a particular narrative.
  • Rumors and speculation: The spread of unconfirmed information and speculation can quickly lead to misunderstandings and false beliefs.
  • Manipulated Statistics: Selectively presenting or distorting statistics can give the appearance of legitimacy while in reality they are misleading.

What can you do?

  1. Check facts: Don’t rely on the first source you come across. Look for multiple sources that confirm the same information.
  2. Question the intent: Consider who might benefit from disseminating certain information and whether there is an agenda that is being promoted.
  3. Build media literacy: Stay vigilant and continually educate yourself to recognize and question the techniques of traditional disinformation.
  4. Choose trustworthy sources: Consume news from established and reputable media outlets known for their thorough research and fact-checking.

By remaining aware of the danger posed by traditional disinformation and not just focusing on AI-generated content, we can be better prepared to distinguish truth from lies in an increasingly complex information landscape.


Conclusion: The election campaign in Austria is facing a major challenge. The spread of AI deepfakes and mass fake advertising threatens not only the integrity of elections, but also citizens' trust in the democratic process. It is essential that both voters and platforms remain vigilant and actively combat the spread of misinformation. This is the only way to ensure that the election in autumn 2024 will be fair and transparent.



If you enjoyed this post and value the importance of well-founded information, become part of the exclusive Mimikama Club! Support our work and help us promote awareness and combat misinformation. As a club member you receive:

📬 Special Weekly Newsletter: Get exclusive content straight to your inbox.
🎥 Exclusive video* “Fact Checker Basic Course”: Learn from Andre Wolf how to recognize and combat misinformation.
📅 Early access to in-depth articles and fact checks: always be one step ahead.
📄 Bonus articles, just for you: Discover content you won't find anywhere else.
📝 Participation in webinars and workshops: Join us live or watch the recordings.
✔️ Quality exchange: Discuss safely in our comment function without trolls and bots.

Join us and become part of a community that stands for truth and clarity. Together we can make the world a little better!

* In this special course, Andre Wolf will teach you how to recognize and effectively combat misinformation. After completing the video, you have the opportunity to join our research team and actively participate in our educational work - an opportunity reserved exclusively for our club members!


Notes:
1) This content reflects the current state of affairs at the time of publication. The reproduction of individual images, screenshots, embeds or video sequences serves to discuss the topic.
2) Individual contributions were created through the use of machine assistance and were carefully checked by the Mimikama editorial team before publication. (Reason)