At a time when trust in online content is shaky, the NGO Ekō has uncovered an alarming vulnerability in Meta's advertising system. The case raises questions about the safety and integrity of social media advertising and shows that even a giant like Meta is prone to serious oversights.

Ads calling for the execution of a politician or for synagogues to be burned are deeply contrary to human values and clearly violate Meta's advertising guidelines. Despite this, some such ads recently received Meta's approval. According to the NGO Ekō, Meta's content moderation failed here: of the 13 ads Ekō placed, eight were approved - including ones that described the Spanish elections as "rigged".

Meta's Ad Scandal: Ekō's Revealing Experiment

The non-governmental organization Ekō decided to take a bold step: it bought ads on Meta that clearly violated the company's anti-hate speech policies. Surprisingly enough, many of these ads were approved.

In light of the latest provisions of the Digital Services Act, with which Meta must comply from August 25, 2023, Ekō is calling for stronger safeguards in the ad approval process. The organization emphasizes that it is currently far too easy to spread hate speech via Meta ads.

The revelation exposed not only a worrying gap in Meta's ad moderation but also the urgent need to rethink how online advertising is regulated and monitored.

Screenshot: Meta accepts advertising full of hatred and calls for violence in Europe

Ekō submitted 13 ads to Meta in Europe, all of which contained AI-generated images and text that clearly violated Meta's policies. Eight of these ads were approved. The ads Meta did block were flagged, and therefore not aired, because of their political nature - not specifically because of their deeply inhumane messages. Ekō withdrew the ads before they could run. Meta commented on the organization's findings in a statement:

This report was based on a very small sample of ads and is not representative of the number of ads we review daily across the world. Our ads review process has several layers of analysis and detection, both before and after an ad goes live. We're taking extensive steps in response to the DSA and continue to invest significant resources to protect elections and guard against hate speech as well as against violence and incitement.

Is Meta ready for the DSA?

While Meta has already taken steps to meet the requirements of the DSA, it is clear that more needs to be done. The company needs a more robust content moderation strategy to ensure that harmful content is never published. In addition, greater transparency and accountability are needed to regain public trust.

Meta must therefore devote more resources to fighting hate speech and dangerous advertising in the future, as its own statement promises. Companies or groups that violate the Digital Services Act (DSA) face penalties of up to six percent of their global annual turnover.

Screenshot: Questions and Answers: Digital Services Act

The Commission will have the same surveillance powers as under current antitrust rules, including investigative powers and the ability to impose fines of up to 6% of global turnover.

Why it matters

It is no exaggeration to say that the Ekō case is a wake-up call for the entire advertising and social media industry. At a time when fake news and hate speech poison the online environment, we do not have the luxury of underestimating the importance of safe and responsible advertising.


On the eve of the DSA's entry into force, Meta approved a series of violent, racist, anti-Semitic and "Stop the Steal" ads targeting Europeans. One of the adverts called for the execution of a prominent MEP because of her stance on immigration. It is exactly this kind of harmful content on Meta that the DSA is meant to combat.

The alarming findings come just a day before the EU's Digital Services Act (DSA) comes into force. If implemented properly, the law will target the core of Big Tech's business model, which accelerates the spread of hate speech and misinformation. In an experiment conducted between August 4 and 8, Facebook approved a series of eight highly inflammatory ads calling for a violent "Stop the Steal"-style uprising to overturn the results of Spain's recent election, as well as ads containing racist and anti-Semitic slurs and calls for violence against immigrants and the LGBTQ+ community.

Each piece of ad copy was accompanied by manipulated images created with AI-powered image tools, showing how quickly and easily this new technology can be used to amplify harmful content. New research by the corporate responsibility group Ekō, in collaboration with the People Vs. Big Tech network, shows that Meta is still unable to detect and block ads containing hate speech, election disinformation and calls for violence - including a death threat against a sitting member of the European Parliament.

Meta ads monetize content that calls for executions, genocide and "Stop the Steal"

Several ads played on fears of immigrants flooding Europe and linked immigration to alleged violent crime. One ad aimed at a German audience called for synagogues to be burned in order to "protect white Germans." Two ads promoted a "Stop the Steal" narrative around Spain's recent elections, claiming that electronic voting machines had been tampered with and calling for a violent uprising to assassinate political opponents and overturn the election results. An ad targeted at Romania called for the "cleansing" of all LGBTQ+ people. Another advert called for the execution of a prominent MEP over her stance on immigration.

Each ad was accompanied by a manipulated image created with the AI image generation tools Stable Diffusion and DALL-E 2. Ekō researchers were easily able to create images showing, for example, a masked person stuffing ballots into a ballot box, drone footage of immigrants crowding into ports and border crossings, and burning synagogues. In total, 8 of the 13 ads were approved by Meta within 24 hours; all of the approved ads violated Meta's own policies. Five ads were rejected because they referred to elections or politicians and therefore counted as political ads.

All ads were withdrawn by the researchers before publication, so they were never seen by Facebook users. The ads were written in German, French, English and Spanish. The five rejected ads were blocked because they could be classified as political ads, not because of hate speech or incitement to violence - again demonstrating the inability of Meta's automated systems to detect harmful content.

The results of this experiment are alarming and show that, despite efforts by major tech companies and lawmakers to curb the spread of hate and misinformation, significant gaps remain. It is essential that these companies be regulated more strictly and take their responsibility to prevent the spread of hate and misinformation seriously.


Conclusion: The case of Ekō vs. Meta shows that even the largest and most advanced companies in the world are not immune to mistakes. It also highlights the need for stricter regulation and self-regulation in the online advertising industry. Hopefully, such revelations will serve as a catalyst for positive change and not just another scandal in the annals of internet history.

At the end of the day, we are all users and it is within our power to put pressure on these platforms to ensure they operate more safely and responsibly. Let's use this case as a reminder of the importance of staying informed and doing our part.

Stay informed and engaged. Sign up now for the Mimikama newsletter and take advantage of Mimikama's media education offerings. Our future in the digital space depends on our collective vigilance and actions.


If you enjoyed this post and value the importance of well-founded information, become part of the exclusive Mimikama Club! Support our work and help us promote awareness and combat misinformation. As a club member you receive:

📬 Special Weekly Newsletter: Get exclusive content straight to your inbox.
🎥 Exclusive video* “Fact Checker Basic Course”: Learn from Andre Wolf how to recognize and combat misinformation.
📅 Early access to in-depth articles and fact checks: always be one step ahead.
📄 Bonus articles, just for you: Discover content you won't find anywhere else.
📝 Participation in webinars and workshops: Join us live or watch the recordings.
✔️ Quality exchange: Discuss safely in our comment function without trolls and bots.

Join us and become part of a community that stands for truth and clarity. Together we can make the world a little better!

* In this special course, Andre Wolf teaches you how to recognize and effectively combat misinformation. After completing the video, you have the opportunity to join our research team and actively take part in our educational work - an opportunity reserved exclusively for our club members!


Notes:
1) This content reflects the current state of affairs at the time of publication. The reproduction of individual images, screenshots, embeds or video sequences serves to discuss the topic.
2) Individual contributions were created with the use of machine assistance and were carefully checked by the Mimikama editorial team before publication. (Reason)