Authors:
Saman Nazari – Lead Researcher at Alliance4Europe, Counter Disinformation Network Coordinator.
Claudia De Sessa – volunteer with Alliance4Europe – The information and views set out in this article are those of the author; they do not reflect the official opinion of, and are not linked to the activities of, her employer.
Joel Haglund – volunteer with Alliance4Europe – The information and views set out in this article are those of the author; they do not reflect the official opinion of, and are not linked to the activities of, his employer.
Introduction
During an ongoing investigation into illegal Russian influence operations on X, Open Source Intelligence (OSINT) researchers discovered what appears to be a coordinated inauthentic behaviour (CIB) network distributing child sexual abuse material (CSAM). The network can be observed hijacking hashtags, publishing explicit CSAM videos, and redirecting users to a wide range of other platforms. Because the operation floods hashtags, researchers named it “Operation X-ploitation.” As the lead researcher on this case is based in Belgium, it was reported to the Belgian police and the NGO ChildFocus on 18 July.
A CIB network can be understood as a series of online accounts displaying similar patterns of behaviour, often aimed at amplifying content or manipulating audiences.
In this case, the CIB network is centred around two types of accounts:
1. accounts posting CSAM content,
2. accounts amplifying posts with CSAM content.
Because of the intense use of bots to comment on and view the videos, it is difficult to estimate how many genuine views the videos received. However, numerous posts have more than 20,000 to 30,000 views and more than 200 inauthentic comments.
This investigation took place between 18 and 22 July 2025. However, researchers found evidence that the operation had been ongoing since at least 17 May. As of 28 July, the operation appears to have been partially mitigated by X, which has been age-restricting and suspending accounts at a quicker rate and regularly clearing out hashtags. However, this has not fully stopped the operation: new accounts keep being created and sharing CSAM content, albeit at a smaller scale and with less visibility.
Since the beginning of the investigation, the operation has been only partially mitigated by X. While some action was taken to suspend the accounts involved, the operation resumed within a week. Once the researcher's account was age-verified, the CSAM content became accessible again. This means that the underlying issues – namely, the ease of account creation and non-systematic moderation – have not been adequately addressed.
This report outlines the patterns used by this network, explores its amplification tactics, and shows how the operation exploits some of the same vulnerabilities as influence operations such as Doppelganger.
We outline how this systemic risk, exploited by both CSAM and Russian CIB networks, may constitute a violation of Articles 34 and 35 of the Digital Services Act, as the risk has remained unaddressed despite previous flagging of the issue by the research community.
Background
In 2022, the social media platform Twitter was purchased by Elon Musk, who proceeded to rename it X. Musk’s takeover entailed many things beyond a simple name change. He fired a large portion of the staff, including the content moderators and existing trust and safety personnel who worked to keep the platform safe.
For the Human Rights and Counter Disinformation communities, and of course for X users in general, this had serious consequences. The change in corporate direction was followed by a surge in influence operations exploiting the new vulnerabilities that appeared on X. One such vulnerability is the ease of creating new accounts: in an experiment, we showed that it takes a human only about 1.5 minutes to create an account using a temporary email address, freely available online. There are very few, and inadequate, safeguards against mass account creation, which should be addressed to prevent these types of operations from working.
Another vulnerability that became apparent is significantly weakened content moderation, together with a significant increase in reported content. As a consequence, the European Commission has opened formal proceedings against X to assess whether it may have breached the Digital Services Act (DSA) in areas linked to risk management and content moderation.
X's lack of an effective systemic response, as already seen with the Russian influence operations Doppelganger and Operation Overload, provides further evidence that its content moderation system is inadequate and that a systemic risk exists.
Coordinated Inauthentic Behaviour (CIB) Analysis
More than 150 accounts sharing CSAM videos were identified during a three-day investigation, with new accounts continuously being created during the investigation. The goal of the CIB appears to be tied to monetary gain, either through selling CSAM or linking to possible scams.
A number of key takeaways emerge:
- The accounts show clear signs of being automated and display near-identical behavioural patterns. However, while they all use more or less the same hashtags and behaviours, the content and the destinations they direct users to differ.
- Some accounts appear to be hacked, belonging to inactive users who used to post regular content. Others seem to have been created solely for this operation, and some appear to be repurposed spam accounts.
- The accounts appear to originate from different geographical regions, judging by their names: a number displayed Vietnamese-sounding names, while others carried Anglophone-sounding names.
- The accounts employ a fixed set of tactics, techniques and procedures (TTPs). In particular, the CIB network continuously floods a fixed (or semi-fixed) set of hashtags with new accounts spamming CSAM. Amplifier accounts then comment on the videos with automated replies, boosting the content in the feed. Hashtags serve as an aggregator of CSAM, making it easy to discover other flooded hashtags and new CSAM accounts.
- The accounts are sharing links that lead to a series of different online chat spaces, like Telegram and Discord, but also to dedicated, separate websites. Therefore, we suspect that the accounts might be linked to a network which services a number of different clients simultaneously, or that different networks are simply deploying similar techniques to spread their content.
Taken together, these factors suggest that the network is either purchasing accounts from different sources or repurposing previously used spam accounts.
Tactics Used
The network’s or networks’ posts have a few different components:
- Hashtags
- Text
- Links
- Video content
- Comments
Hashtag flooding – dominating a series of set hashtags
The public version of this article does not include the hashtags to prevent the exposure of people to CSAM.
The accounts appear to be using the same series of hashtags. Some of them flood R-rated (but ostensibly 18+) hashtags with CSAM content, while others seem dedicated solely to CSAM.
A continuous flood of accounts uses 19 different hashtags, mostly built around words like “mom” and “teenagers”, words related to incest, and hashtags related to pornography.
The use of hashtags related to teenagers and moms raises additional concerns, as they are common hashtags that may be innocently browsed and expose both minors and adults to CSAM.
During this investigation, a specific set of hashtags, related to words commonly associated with CSAM, was particularly useful in identifying the network, as these hashtags were constantly flooded with CSAM and little other content.
Additionally, related hashtags were recommended by the X search bar. For example, when searching for one of the hashtags, X suggests other hashtags promoted by the network; these appear to be general pornography-related hashtags.

Image 1: X search bar suggesting CSAM-related hashtags
While accounts are taken down, new ones consistently appear under the same tags. During the investigation, more than 150 of these accounts were identified, and while the vast majority are suspended within a matter of hours, the flood of CSAM content does not stop. For example, under one of the hashtags, CSAM content was posted every 1 to 10 minutes, consistently, over three days.

Image 2: frequency of posting of CSAM
Text – repeated
There are a few different variations of the text used by the accounts. These variations have different components that are combined into posts.
- A repeating component is “You know what to do – Like, RT & Follow!” or “BEBI👀 You know what to do – Like, RT & Follow! 😎”, combined with different hashtags.
- Another uses a random string of text followed by “ideas” and an emoji.
- A third component is “Seen This” followed by emojis.
- Another common component is “Check This” followed by emojis.
- Lastly, almost all posts have a string of random numbers and/or letters attached to them.

Image 3: Example of CSAM posts with standardised text
All the posts attempt to drive users to third-party websites or communication platforms.
Links
The accounts use videos to entice the audience, to then redirect them to external links.
These links redirect to:
- Websites selling folders of CSAM
- Telegram accounts and groups, and Discord groups, to access CSAM
- (Possibly) scam tools to “hack” Snapchat accounts
- Dating websites
- Pages to download tools that appear to be private/secured chat apps used to access communities for sharing CSAM
Some of these links resemble those of the Russian influence operation “Doppelganger”, which characteristically obfuscates its links' end destinations through chains of redirections, while others do not. Although the X accounts sharing these links are taken down, X does not block the links themselves, making it easy for operators to continue spreading the content through other throwaway accounts.
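For researchers documenting where such links actually lead, a redirect chain can be traced programmatically. Below is a minimal sketch in Python; the URL shown is a generic placeholder, not one of the links observed in this operation, and JavaScript-based redirects would not be captured this way.

```python
# A minimal sketch, using a placeholder URL, of tracing a redirect
# chain to its end destination. Only HTTP-level redirects are caught;
# JavaScript-based redirects would require a headless browser.
import requests

def trace_redirects(url: str) -> list[str]:
    """Follow HTTP redirects and return every hop, with the final URL last."""
    response = requests.get(url, allow_redirects=True, timeout=10)
    hops = [r.url for r in response.history]  # intermediate redirect responses
    hops.append(response.url)                 # final destination
    return hops

for hop in trace_redirects("https://example.com/placeholder-redirect"):
    print(hop)
```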
Looking at one of the amplified pages, we can see that it is trying to sell access to bundles of CSAM content. One of these pages, depicted below, sells different types of subscription plans.

Image 4: Destination of one of the links advertised in the CSAM posts, leading to a real CSAM purchasing platform.
Investigating the payment options shows that customers can pay via PayPal and Bitcoin.

Image 5: payment options on CSAM platform
While we cannot extract any more information from the PayPal link, the Bitcoin option provides a Bitcoin wallet address.

Image 6: crypto wallet of the CSAM platform, showcasing that it is active.
Since 5 July, the wallet has received $660 through 23 transactions, potentially confirming that the operation is reaching people who are buying access to the content.

Image 7: crypto wallet of CSAM platform showcasing previous transactions worth $660.
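Activity on a Bitcoin address of this kind can be checked against public blockchain data. The sketch below assumes Blockstream's public Esplora API and its response format; the address string is a placeholder, as the real wallet address is withheld from this report.

```python
# Hedged sketch: checking activity on a Bitcoin address via Blockstream's
# public Esplora API (endpoint and field names assumed from its public
# documentation). "WALLET_ADDRESS" is a placeholder, not the real address.
import requests

def address_activity(address: str) -> dict:
    """Return the transaction count and total BTC ever received by an address."""
    resp = requests.get(f"https://blockstream.info/api/address/{address}", timeout=10)
    resp.raise_for_status()
    stats = resp.json()["chain_stats"]
    return {
        "tx_count": stats["tx_count"],
        "btc_received": stats["funded_txo_sum"] / 1e8,  # satoshis to BTC
    }

print(address_activity("WALLET_ADDRESS"))
```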
The hosting of the website is hidden behind Cloudflare, making it impossible to track the hosting provider or owner at this stage.
Whether the external pages are related to each other, and whether they are somehow “browseable” to someone who knows what to look for, needs further investigation. For example, membership in the private chat could be a prerequisite for downloading content from the Telegram groups. Disclaimer: this type of interconnectivity has not been verified, and that part of the investigation has been left to law enforcement.
Video content
The accounts often share the same videos within a set period. In other words, waves of different videos seem to appear at the same time.
The videos depict children, ranging from toddlers to teenagers, who are sexually assaulted, raped, or are otherwise explicitly exposed. Some of the videos have branding, relating to the link they are trying to amplify. Some of the videos have comments under them that also depict lists of videos.
Comments
A few seconds after the original post is published by one account, a series of other accounts start flooding the post with comments. The comments contain either links to CSAM selling websites or Telegram channels, or a text encouraging users to check their profile bio for “teen content” using a special font.

Image 8: spam accounts amplifying CSAM accounts via automated commenting
While the comments are automatically flagged as spam, the posts are still appearing as “top” content of the hashtags they target, seemingly through this comment flooding.
Accounts
While some empty accounts seem to have been created specifically for this purpose, spam accounts and hacked accounts also appear to have been put to use.
Spam accounts
Some accounts seem to be generic spam accounts, posting content such as crypto ads and then CSAM. This suggests that these spam accounts may be operated as a for-profit service through which people can pay to have unmoderated content uploaded. Should this be confirmed, it would represent a significant and unmitigated systemic risk.
Some other spam accounts post random things, with captions likely autogenerated.


Image 9-10: on the left, a CSAM account. On the right, the same account posting a series of autogenerated quotes, in a format common to several spam accounts.
Observed spam accounts were often created recently, between March and June 2025, and tend to have few followers (observed: 0 to 32) and to follow few accounts (observed: 0 to 35).
We observed a series of spam accounts with Western-sounding names, such as Linda Johns, Maria Green and Elizabeth Brown. They have very few followers and follow very few accounts (between 0 and 5). These “Western” accounts share links that redirect to a page inviting people to join a CSAM Discord server; the Discord links lead to websites containing malicious code.
However, other accounts, with names of no clear origin, appear to lead to real Discord servers. One of these uses an extremely explicit name.
Another series of spam accounts has Vietnamese names. They have slightly higher numbers of followers (1 to 30) and followed accounts (13 to 35). The Vietnamese-named accounts provide links to different pages: CPteen.pages .dev, a real, active CSAM website, and various CSAM-related Telegram channels such as “Dirtybox”, “Flamefolder”, “XStore” and “Snapchat hack”.
Hacked accounts
Several accounts that previously showed authentic behaviour appear to have been dormant, sometimes for years, before being used to disseminate CSAM.
Pornographic posts are usually shared in close succession. Concerningly, some hacked accounts retain profile pictures of real people.

Image 11: hacked account posting CSAM
Accounts following accounts
Many of the accounts, especially those with profile pictures of Asian girls, follow other similar spam accounts. The profile of “Dang” below exemplifies this widespread pattern.
Both the account posting CSAM content and the accounts it follows post replies that look like quotes, followed by a random name. Notably, the accounts followed by the pornographic one do not themselves post CSAM content.

Image 12-13: on the left, a CSAM account and the list of followed accounts. On the right, those followed accounts posted similarly autogenerated quotes.
X Response to the Operation – an update after 22 July
This investigation took place between 18 and 22 July 2025. However, researchers found evidence that the operation had been ongoing since at least 17 May. Across the whole period, while no definitive number can be given, the number of posts appeared to be in the millions, and the operation continued largely undisturbed.
Since 28 July, X has been removing individual pieces of content more rapidly. However, while this decreased the intensity of the operation, it did not effectively halt its activities. On 29 July, X's age verification system could also be observed blocking users from accessing the content, which often left the CIB network's commenting accounts unable to amplify it. This, in turn, hampered the hashtag hijacking without decisively stopping it.
Indeed, as of 29 July, X’s mitigating measures achieved mixed results:
- A slowing down, but not the end, of the operation. Fewer posts appear on the “latest” feed, and especially on the “top” feed; however, one or two graphic posts continue to appear.
- Posts remained online for shorter periods and were restricted under X's age assurance policy before being taken down completely.
- Hashtags were wiped on a regular basis, but posts continue to appear.
The operation itself was not stopped, precisely because the platform's key vulnerabilities, namely the ease of creating fabricated accounts and inconsistent moderation, have not been addressed. Indeed, new accounts keep appearing, still posting graphic content. In particular:
- New accounts continued to be created and amplified, albeit not at the same scale.

Image 14: accounts still being created on 29 July
- The same hashtags continued to be used and to be recommended. While the hashtags were wiped regularly, posts kept appearing under the known hashtags, as accounts are so easily created.


Images 15 and 16: on the left, an example of a hashtag still in use. On the right, searching another hashtag still surfaces suggested CSAM-related hashtags. Screenshots recorded on 29 July.
- Posts did not appear with the same frequency in the “top” section, likely because the shortened takedown time prevented amplification through automated commenting. However, since the key vulnerabilities themselves were not addressed, posts continued to be published in the “latest” section, and some still made it to the “top” feed.

Image 17: CSAM posts still making it to the top feed, on 29 July, and several posts appearing in the “latest” feed.
- Amplification through automated commenting continued, albeit on a smaller scale. For example, the post shown below had 174 likes, no comments, no retweets and 26 views one minute after publication. After 6 minutes, the post had accumulated 59 comments, 4 retweets, 191 likes and 1.6k views. All the comments were spam. (A sketch of how such velocity patterns could be flagged follows the image.)

Image 18: CSAM posts still being amplified.
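Engagement velocity of this kind can itself serve as a detection signal. The sketch below is a hedged illustration modelled on the figures above; the snapshot format and thresholds are our assumptions, not X's actual signals.

```python
# Hedged illustration: flagging inauthentic amplification from engagement
# velocity (dozens of spam comments within minutes of posting). The snapshot
# format and thresholds are assumptions, not X's actual detection systems.
from dataclasses import dataclass

@dataclass
class Snapshot:
    seconds_since_post: int
    views: int
    comments: int

def looks_amplified(latest: Snapshot,
                    max_comments_per_minute: float = 2.0,
                    min_comment_view_ratio: float = 0.02) -> bool:
    """Flag a post whose comments arrive faster than plausible organic engagement."""
    if latest.seconds_since_post == 0 or latest.views == 0:
        return False
    comments_per_minute = latest.comments / (latest.seconds_since_post / 60)
    comment_view_ratio = latest.comments / latest.views
    return (comments_per_minute > max_comments_per_minute
            and comment_view_ratio > min_comment_view_ratio)

# The post described above: 59 comments and 1,600 views six minutes after posting.
print(looks_amplified(Snapshot(seconds_since_post=360, views=1600, comments=59)))  # True
```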
It initially seemed that X was addressing the CSAM operation, as the shortened takedown times and the regular wiping of hashtags would suggest. However, once the researchers' account was age-verified, the CSAM content became available again and the operation came back in full swing. Part of the countermeasures undertaken by X therefore appears to be linked to the new age restriction policy.

Image 19: CSAM posts temporarily restricted on the basis of age restrictions.
Indeed, X’s approach seems to have been to implement its age assurance policy to shield sensitive content from accounts under 18. This is a departure from the platform’s original response to the operation, where X suspended accounts, likely on the basis of platform rules violations.
Under the age assurance policy, X verifies a user's age via a set of automated and proactive measures.
This age verification measure seems to be a reaction to the recent need to comply with the new Irish Online Safety Code and the UK Online Safety Act, which include provisions on age assurance to keep minors away from harmful content, including pornographic and violent content. Indeed, on 24 July, the Irish media regulator clamped down on X regarding this matter. In this sense, the measures are not CSAM-specific, nor do they systematically tackle the underlying technical issues: blocking URLs and hashtags, and curbing the ease of creating fabricated accounts.
Update after 06 August
The researchers verified that age-verified accounts can still access the content.
The operation continues to operate much as it did during the initial investigation, reaching tens of thousands of users.

Image 20: CSAM content still available for age-verified profiles
Neither the operation itself nor the underlying issues of easy account creation and inconsistent content moderation appear to have been addressed, and the content remains easily available.
Systemic risks
Similarities to Doppelganger and Operation Overload
The Russian influence operations Doppelganger and Operation Overload use one X account to post their content and others to amplify it inauthentically. This is made possible by the ease of creating disposable X accounts, which takes only 1.5 minutes manually and likely mere seconds when automated.
Because X lacks adequate safeguards in the account creation process, requiring only an email address, threat actors can easily create an effectively unlimited number of disposable accounts for their illegal activities.
X needs to put in place safeguards against the bulk creation of accounts, such as phone number verification, blacklisting email addresses used by temporary email services, and tracking and blocking IP addresses that create several accounts within a short period, along with other patterns of behaviour that X can detect through its logs. A minimal sketch of one such heuristic follows.
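The sketch below illustrates the IP-based signup rate limit suggested above. The event format, names and thresholds are our assumptions, not X's actual systems.

```python
# Illustrative sketch of one signup-time safeguard: flagging IP addresses
# that create many accounts within a short window. Thresholds are assumed.
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 3600  # look-back window of one hour (assumed threshold)
MAX_SIGNUPS = 3        # signups per IP per window before flagging (assumed)

signups_by_ip: dict[str, deque] = defaultdict(deque)

def is_suspicious_signup(ip: str, now: float | None = None) -> bool:
    """Record a signup from `ip` and flag it once the IP exceeds the rate limit."""
    if now is None:
        now = time.time()
    window = signups_by_ip[ip]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # discard signups older than the look-back window
    return len(window) > MAX_SIGNUPS
```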
Similar to the Doppelganger operation, the CSAM network also relies on redirection links to obfuscate the real URLs of its websites. Strangely, X does not block these redirect URLs when it removes the accounts, allowing the operation to continue spreading its content.
It is critical to note that this does not imply that these accounts are operated by Russian influence operators. Instead, it should be interpreted as different groups exploiting the same vulnerability and platform design flaw.
X Takedowns are not helping
Because these accounts post very explicit CSAM, the lifetime of any individual account is short. Initially, before the more intense takedown period, posts were usually taken down within hours to a day. However, new accounts are created continuously, suggesting some degree of automation, providing uninterrupted access to CSAM content. Paradoxically, this modus operandi actually helps the content spread and makes it harder to gather evidence: the continuous whack-a-mole deletion of individual accounts removes researchers' access to evidence, while the links that new accounts continue to amplify are left unblocked by X. The central issue, therefore, is that the exploitable vulnerability persists.
Further supporting this, two initial posts were flagged to X using its DSA Article 16 illegal content flagging tool, and while the posts were taken down instantly, the operation continued undisturbed for at least two to three days.
Despite X's initial period of action, enforcement appears to have stopped and the operation resumed, meaning that the operation has not been systematically mitigated by X.
Conclusions
In this case, the CSAM CIB network can be shown to exploit some of the same vulnerabilities as Russian influence operations. This type of cross-disciplinary observation provides topic-agnostic evidence of potential systemic risk, as defined by Articles 34 and 35 of the Digital Services Act (DSA).
This also demonstrates the importance of working together across disciplines to analyse and address systemic risks, as such risks are not isolated to one theme or another. More information exchange between researchers working on information integrity and DSA enforcement is key to detecting these types of vulnerabilities.
X should focus on tracking and stopping manipulative or harmful patterns of behaviour and the sharing of harmful links, and should respond to the shared characteristics of the accounts used in these operations, instead of addressing individual accounts only. A more systematic and systemic approach, rather than banning easily replaceable accounts one by one as they appear, is the only way to stop this network.
As highlighted, the analysed CSAM network follows specific patterns: low numbers of followers and followed accounts, quick posting, large amounts of spam comments used for boosting, similar wording, similar links, and the flooding of a set of hashtags with the same posts. These patterns can potentially be translated into detection queries, as has been done in the past, and the matching accounts taken down; a sketch of such a query follows.
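Below is a minimal sketch, under stated assumptions, of how these patterns could be expressed as a query over account metadata. The field names, thresholds and placeholder hashtag set are hypothetical; real detection would run against platform-internal data.

```python
# Hypothetical sketch: translating the observed behavioural patterns into a
# query over account metadata. Field names and thresholds are assumptions
# drawn from the ranges reported above, not a real X or platform API.
import pandas as pd

FLOODED_HASHTAGS = {"hashtag_a", "hashtag_b"}  # placeholders; real tags withheld

def flag_candidates(accounts: pd.DataFrame) -> pd.DataFrame:
    """Return accounts matching the observed network profile."""
    return accounts[
        (accounts["followers"] <= 35)               # low follower counts
        & (accounts["following"] <= 35)             # low followed counts
        & (accounts["created_at"] >= "2025-03-01")  # recently created (datetime column)
        & (accounts["posts_per_hour"] >= 6)         # rapid, flood-style posting
        & accounts["hashtags"].apply(               # uses the known flooded tags
            lambda tags: bool(FLOODED_HASHTAGS & set(tags))
        )
    ]
```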
This report was made possible through the Counter Disinformation Network.
The CDN is a collaboration and crisis response platform, knowledge valorisation resource, and expert network, bringing together 60+ organisations and over 300 practitioners from OSINT, journalism, fact-checking and academia from 25 countries. The network has been used to coordinate projects on four elections and has produced 80+ alerts since its creation in May 2024.
Alliance4Europe’s participation in the writing of this report was made possible by the Ministry of Foreign Affairs of the Republic of Poland.
This report is a public task financed by the Ministry of Foreign Affairs of the Republic of Poland within the grant competition ‘Public Diplomacy 2024-2025 – the European dimension and countering disinformation.’
The opinions expressed in this publication are those of the authors and do not reflect the views or official positions of the Ministry of Foreign Affairs of the Republic of Poland.

Name of the task: Information Defence Alliance
Project financed from the state budget under the competition of the Minister of Foreign Affairs of the Republic of Poland “Public Diplomacy 2024–2025 – the European dimension and counteracting disinformation”
Amount of funding: 473 900 PLN
Brief description of the task: The Information Defence Alliance project aimed to monitor and mitigate influence operations targeting France, Italy, Germany, Moldova, Romania, Slovakia, and the Belarusian diaspora.
To do this, the project had three pillars:
1. researching influence operations,
2. inviting organisations and researchers from these countries to the CDN,
3. providing trainings to organisations to increase their capacity and share a common language.


