Collaborative Defense: Tackling Disinformation against the EU Elections

6 Jun 2024 | Reports

What is the Counter Disinformation Network?

The Counter Disinformation Network started as an initiative to bring together European organisations and independent researchers to monitor attacks against the EU elections. Participants include civil society organisations, universities, think tanks, journalists, independent researchers, and fact-checkers. Our community now consists of more than 35 organisations and 100 individuals, and has broadened its focus to work beyond the elections.
Description above updated 31/07/2024.

Overview of Cases

Since the 27th of May, 6 pre-election cases have been submitted to the Network by its participating organisations. Some are long-running investigations that began before the Network existed, while others started through collaborations fostered within it.

Case 1 – Facebook Page Operated from Benin Attacking Macron and Ukraine Using Ads

The first case of the Network was a collaboration between the insightful independent researcher Kristina Gildejeva and Alliance4Europe’s Saman Nazari.

The investigation showed that a purportedly French nationalist Facebook page, operated by three accounts based in the West African country of Benin, was running ads targeting France with anti-Macron and anti-Ukraine content.

Since August 2023, the page has run 90 ads that hide their funding source and lack correct political labelling. Forty of the 90 ads used obfuscation methods, such as hidden characters and words, to evade automated content moderation.
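To illustrate the hidden-character tactic (the investigation does not publish the actors' exact tooling, so the details below are illustrative assumptions), here is a minimal Python sketch of how zero-width characters can hide a keyword from a naive substring filter, and how stripping invisible "format" characters before matching defeats the trick:

```python
# Illustrative sketch of the hidden-character trick (assumed details; the
# investigations do not publish the actors' exact methods).
import unicodedata

def obfuscate(text: str) -> str:
    """Insert zero-width spaces (U+200B) between letters so a naive
    substring filter no longer matches the word."""
    return "\u200b".join(text)

def strip_invisible(text: str) -> str:
    """Drop Unicode 'format' (Cf) characters, which include zero-width
    spaces and joiners, before running moderation checks."""
    return "".join(ch for ch in text if unicodedata.category(ch) != "Cf")

blocked_word = "Macron"  # hypothetical filter keyword
ad_copy = "Stop " + obfuscate("Macron") + " now"

print(blocked_word in ad_copy)                   # False: naive filter bypassed
print(blocked_word in strip_invisible(ad_copy))  # True: normalisation defeats it
```

Robust moderation pipelines typically normalise text in this way before running keyword or classifier checks, which is why the success of such simple obfuscation is notable.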

At least two ads accused Ukraine of being behind the terrorist attack on Crocus City Hall, and one of them also accused the United States. Another ad promoted a video impersonating Euronews.

In a twist, the ads were paid for in Canadian dollars, a further indication that the page is part of a foreign influence operation.

While we flagged the case to Meta, the page remains active.

Read the full investigation here.

Case 2 – Pro-Russian Ad Campaigns Approved by Meta

The second case is a report by the sharp researchers Paul Bouchaud at AI Forensics and Amaury Lesplingart at CheckFirst.

Their report delves into pro-Russian ad campaigns on Meta’s platforms targeting Italy, Germany, France, and Poland between May 1 and 27, 2024. During this period, Meta approved at least 275 pro-Russian ads that lacked the required political disclaimers. The ads bypassed Meta’s moderation system using hidden characters and word obfuscation, similar to the first case.

The ads reached 3,075,063 accounts in Italy, France, Germany, and Poland.

The researchers state that Meta does not adequately prevent the misuse of its advertising systems, and that the reach of pro-Russian ads increased notably in recent weeks, with new countries being targeted.

Read the full report here.

The second case demonstrates that the first is not a one-off occurrence but potentially part of a larger systemic issue on Meta’s platforms. For similar reasons, the European Commission opened formal proceedings to assess whether Meta has breached the Digital Services Act (DSA) with regard to preventing disinformation on its platforms. The opening decision cited, among other things, inadequate moderation of deceptive political advertising and disinformation campaigns.

Case 3 – Pro-Russian Disinformation Campaign “Operation Overload” Targeting Fact-Checkers and Media

The third case is a report by the ever-inspiring Aleksandra A. at Reset.tech and the excellent researchers Amaury L. and Guillaume K., both at CheckFirst.

It covers a pro-Russian disinformation campaign dubbed “Operation Overload”.

The campaign targets fact-checkers, newsrooms, and researchers worldwide, aiming to deplete their resources and encourage them to amplify false narratives. The actors operate through a coordinated email campaign, an ecosystem of popular Russia-aligned websites, and networks of Telegram channels and inauthentic accounts on X.

A prominent tactic involves flooding media organisations with emails containing links to fake content and anti-Ukraine narratives, particularly targeting France and Germany.

The actors use a tactic we call content amalgamation, which involves merging multiple content types and audiovisual formats to create a multi-layered, coherent story. This story is then strategically amplified across different platforms, creating a false sense of urgency among journalists and fact-checkers while lending credibility to an entirely fabricated reality.

Pro-Russian Telegram channels initially seed the content, which is then amplified on other Russian platforms (VKontakte, forums, websites) and further disseminated to broader audiences.

This disinformation campaign is one of the most advanced cases we have seen, showing how pro-Russian actors are developing their methods.

Read the full report here.

Case 4 – Salvini’s Electoral Campaign Uses Non-watermarked AI Images

The fourth case was produced by one of Alliance4Europe’s volunteers, who wished to remain anonymous. The case investigated the use of unlabelled AI-generated images by Matteo Salvini (MEP candidate, head of the Italian “Lega” party and Italian Minister for Transport and Infrastructure) in his election campaign.

Salvini posted 6 AI-generated, non-watermarked images on X and Facebook; one of these posts reached 2 million users. The posts are part of Salvini’s “More Italy, less Europe” electoral campaign and promote the Great Replacement theory, along with claims that EU governance lacks transparency and that the EU is against Italian food products, farmers, and the traditional family.
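As a side note on how such labelling can be checked: the snippet below is a rough heuristic of our own, not the method used in this case. It scans an image file for the byte signatures that typically accompany a C2PA “Content Credentials” provenance manifest, the labelling mechanism some AI image tools embed. The absence of a marker does not prove an image is AI-generated.

```python
# Rough heuristic (not the method used in the investigation): scan an image
# file for byte signatures that typically accompany a C2PA "Content
# Credentials" provenance manifest, which is stored in JUMBF boxes. A missing
# marker proves nothing about how the image was made, stray matches are
# possible, and a real check would use a dedicated C2PA parser.
import sys
from pathlib import Path

def has_c2pa_marker(path: str) -> bool:
    data = Path(path).read_bytes()
    return b"c2pa" in data or b"jumb" in data

for image in sys.argv[1:]:
    status = "appears to carry" if has_c2pa_marker(image) else "lacks"
    print(f"{image} {status} a C2PA provenance marker")
```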

Using non-watermarked AI images is against commitment 3.b of the 2024 European Parliament Elections Code of Conduct, a voluntary, non-binding agreement signed by all European Groups, including ID (Lega’s EP group). The Code applies only to European political parties and signatory member parties, which do not include Lega.

However, it is important to flag when national parties fall short of the transparency commitments of their own European group. Such cases show why stronger oversight or mandatory transparency measures for national parties should be considered for future EU elections.

Several organisations in the Network are conducting further research into the use of unlabelled AI-generated images in the EU elections.

Read the full case here.

Case 5 – Sanctioned Russian Media Entities and Individuals Accessible on TikTok

The Network consists not only of researchers but also of academics. Case five was written by Dr. Rikard Friberg von Sydow at Södertörn University, Saman Nazari at Alliance4Europe, and researchers at Science Feedback.

The investigation shows how channels of sanctioned Russian media entities and individuals are still accessible on TikTok.

As of June 4th, 2024, at least 29 TikTok channels of Russian media entities banned by the EU, or posing as such, were accessible to EU audiences, including the official Spanish-language account of RT with 2.9 million followers. A network of amplifier accounts frequently or systematically reposts these channels’ content.

Council Regulation No 269/2014 prohibits hosting content from these channels or making them available to EU audiences on video-sharing platforms, as clarified in Commission guidance updated mid-May 2024.

Hours after the report was released, TikTok made all the content of the listed pages invisible in Europe. However, other pages we found after publishing the report remained accessible, suggesting that TikTok acted reactively rather than addressing the issue systematically.

Read the full report here.

Case 6 – Russian Cyberattack on the Polish Press Agency

The Polish members make up one of the largest groups represented in the Network.

Gazeta Wyborcza and the Info Ops Polska Foundation both investigated the Russian cyberattack on the Polish Press Agency (PAP) and the subsequent disinformation campaign. Dominik Uhlig at Gazeta Wyborcza interviewed Kamil Basaj of the Info Ops Polska Foundation about the topic.

The interview highlighted a cyberattack on PAP in which false information was published claiming a planned mobilization of Polish troops to be sent to Ukraine. The false story was intended to portray Poland as an aggressor, thereby legitimizing Russian actions and discrediting Western and Ukrainian efforts in the conflict.

The interview also discussed how the Russian propaganda apparatus uses psychological manipulation, selective information, and fabricated evidence to influence public perception and construct misleading narratives. The aim is to weaken the West, polarize societies, and promote pro-Russian sentiment by intimidating and confusing target audiences, especially in the context of upcoming elections.

Read the report here and the interview here.

Overall Observations:
Diverse Investigative Collaboration:

Each of these cases involved different researchers and organisations working together, underscoring the value of cross-organisational collaboration.

Foreign Influence Operations:

Multiple cases highlight attempts by foreign actors, particularly Russian and pro-Russian entities, to influence public opinion in various European countries through social media platforms. This pattern may partly reflect the participating organisations’ focus on foreign rather than domestic actors.

Tactics of Disinformation:

The use of obfuscation techniques (hidden characters, word obfuscation) to bypass automated content moderation is a recurring tactic. Additionally, AI-generated images and impersonation or takeover of credible news sources are employed to mislead audiences. The dissemination methods are sophisticated, involving coordinated networks of inauthentic pages and email campaigns.

Regulatory and Compliance Challenges:

The cases underscore platforms’ failures to adhere to the Digital Services Act (DSA), sanctions against Russia, and the European Parliament Elections Code of Conduct. Meta’s weak policing of political ads potentially allows foreign actors to interfere in the democratic conversations around the elections. This highlights the need for stronger oversight and mandatory transparency measures for national and European political entities.

Narratives Targeting the Ruling Parties:

The disinformation campaigns target fault lines in society to manipulate voters. They stoke anti-establishment sentiment and try to diminish support for Ukraine. On the narrative side, we are seeing many of the same themes as in the past.

Recommendations

CheckFirst and Reset.tech provided excellent recommendations on how to approach Operation Overload as a journalist, fact-checker, or organisation:

Stay vigilant: Be cautious of unsolicited emails or direct messages, especially those mentioning politically sensitive topics like Ukraine and Russia. Verify the sender’s identity before engaging.

Coordinate with others: Record and share suspicious emails and DMs with other fact-checkers and journalists to identify patterns and coordinate responses.

Use organizational resources: Report suspicious communications to the organization’s IT department or relevant authorities. Encourage cybersecurity training for staff.

Prepare for content amalgamation tactics: Train teams to recognize and identify manipulated content combined to create false narratives.

When it comes to the malicious and opaque use of political ads and AI, we recommend reporting on it yourself or flagging it to organisations such as those in the Network.

Partners

Alliance4Europe, CheckFirst, Science Feedback, ISD, Info Ops Polska, GLOBSEC, DISARM Foundation, Fakenews.pl, Clash Digital, CEE Digital Democracy Watch, Political Accountability Foundation, Logically Facts and more.


Author: Saman Nazari, Alliance4Europe.
Cover photo: Claudia De Sessa, Independent.