Europe's information space has become another battlefield, one where traditional propaganda is now augmented by artificial intelligence tools that help manipulate information flows and frame them in ways favorable to the Russian Federation. Russia has mastered these technologies and put their use into mass production. They form part of the Kremlin's broader strategy of manipulating facts and destabilizing international relations. Under the doctrine of hybrid warfare, often called the "Gerasimov doctrine" after the Chief of the Russian General Staff, the information space is treated as a full-fledged theater of war. The goal of these measures is to destabilize international politics by undermining trust in democratic institutions through the spread of unreliable and misleading information.
The use of deepfake videos, prank calls, and neural bot farms allows their operators to create a parallel reality in which truth cannot be distinguished from fiction. A few years ago, fake videos were primitive attempts whose artificiality and blunders were immediately obvious; now the quality of generation has reached a level where fakes cannot be spotted with the naked eye, only with specialized detection software.
In March 2025, the European External Action Service (EEAS) published a report stating that over 60% of disinformation campaigns in the EU aim not only to promote pro-Russian narratives, but also to provoke internal divisions in NATO and the EU, in particular on issues of support for Ukraine. The Kremlin's information campaigns also sought to discredit individual European politicians as well as the Ukrainian authorities. Deepfake videos and bot farms became the main tools for spreading disinformation and manipulating public opinion.
In June 2022, the mayors of Berlin, Madrid and Vienna became targets of exactly such an attack. Each received a call from a person posing as the mayor of the Ukrainian capital, Vitali Klitschko. The caller put various provocative questions to his interlocutors about military aid to Ukraine and tried to learn what assistance European cities planned to provide to Kyiv. Germany's Federal Office for the Protection of the Constitution (BfV) later determined that it was a real-time deepfake created by the Russian group Storm-1516, which specializes in spreading disinformation and Russian propaganda.
During the Moldovan elections in September 2025, Russian intelligence agencies used a network of TikTok channels to circulate an AI-generated address by President Maia Sandu. In the video, the fake "president" called for abandoning the EU path to "avoid war," for immediate negotiations with Moscow, and for recognition of the occupied Ukrainian territories. The AI flawlessly reproduced her voice and facial expressions. Although the French agency for detecting foreign information interference (VIGINUM) quickly proved the video to be fake, it was viewed more than 15 million times within 48 hours.
The fabrication of fakes in Russia is rampant, and in recent years their quality has reached a new level. It is carried out by various units of the Russian special services and by Kremlin-controlled organizations, the best known of which is the "Agency for Social Analysis and Forecasting" (ASAF). In 2024, it ran a disinformation campaign called "Doppelganger", aimed at destabilizing the countries of the European Union and discrediting Ukraine through the spread of fake news. The technology was used to create identical copies of leading European media such as Le Monde, Der Spiegel, The Guardian and Bild in a matter of minutes. The operators registered domains as similar as possible to the originals (for example, lemonde.ltd instead of lemonde.fr), while AI automatically copied the layout, fonts and current news from the real site, adding only the single fake story the Kremlin needed.
The materials were prepared with LLMs adapted to the style of specific publications, and distribution ran through targeted advertising on Facebook that led readers to these mirror sites. In the first four months of 2024 alone, the bot army generated 34 million comments and almost 40 thousand pieces of content for social networks. The campaign's scale has persisted: in just one week of October 2025, more than 3,000 unique domains were discovered imitating European government institutions and media.
Another notable exposure concerned the Portal Kombat network. In February 2024, the French agency VIGINUM published an investigation into a network of 193 sites imitating local European news. Unlike "Doppelganger", Portal Kombat relies not on a single high-quality fake but on sheer volume. The system automatically collects posts from Russian Telegram channels, machine-translates them into the language of the target country (for example, English, French, German, Spanish or Polish) and instantly publishes them on all 193 sites simultaneously. The purpose of such an operation is to create the illusion that "all the media are talking about it".
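The collect-translate-republish pipeline leaves a telltale signature: near-identical text appearing on many sites at once. A minimal sketch of how an analyst might surface such clusters by fingerprinting normalized article bodies; the sample feed and the three-site threshold are illustrative assumptions:

```python
# Sketch: detect coordinated republication by hashing normalized article text
# and grouping by fingerprint. Sample data and threshold are illustrative.

import hashlib
import re
from collections import defaultdict

def fingerprint(text: str) -> str:
    """Hash the body after stripping case, punctuation and extra whitespace,
    so trivial formatting differences between mirror sites do not matter."""
    normalized = re.sub(r"[^a-z0-9]+", " ", text.lower()).strip()
    return hashlib.sha256(normalized.encode()).hexdigest()

def coordinated_clusters(articles, min_sites=3):
    """articles: list of (site, text). Returns {fingerprint: sites} for any
    text republished on at least `min_sites` distinct sites."""
    by_hash = defaultdict(set)
    for site, text in articles:
        by_hash[fingerprint(text)].add(site)
    return {h: sites for h, sites in by_hash.items() if len(sites) >= min_sites}

# Hypothetical crawl: three "local" sites carry the same planted story.
sample = [
    ("news-a.example", "Breaking: leaders meet in secret!"),
    ("news-b.example", "BREAKING -- leaders meet, in secret"),
    ("news-c.example", "breaking leaders meet in secret"),
    ("local.example",  "Town council approves new park budget"),
]
clusters = coordinated_clusters(sample)
print(len(clusters))  # -> 1 coordinated cluster spanning three sites
```

Production systems use fuzzier matching (shingling, MinHash) to catch translated or lightly paraphrased copies, but the grouping logic is the same.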
Its main feature is the biased presentation of information in a context favorable to the Kremlin, with overt pro-Russian narratives. The Microsoft Digital Defense Report for 2025 attributed the "Structura" network entirely to the Russian Federation; it is managed by the IT company SDA (Social Design Agency). The German analytical center CeMAS recorded the activity of more than 50 thousand bots during working hours in the Moscow time zone, from 9:00 to 18:00. On weekends and holidays activity fell by 90%, and outside working hours it ceased almost completely. The researchers also noted that bot accounts and clone sites were registered through the same cryptocurrency wallets previously used to pay for servers of the APT28 group (GRU unit 26165). The generated texts often contain Russian metadata (for example, Cyrillic characters in the page code) and calques from Russian that are not typical of native speakers of European languages.
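The office-hours pattern CeMAS observed can be checked with very little code: convert each post's timestamp to Moscow time and measure what share falls in a 09:00-18:00 weekday window. A share near 1.0 suggests an operator on an office schedule rather than an organic account. The timestamps below and any cut-off value are illustrative assumptions:

```python
# Sketch: office-hours share of an account's posts in Moscow time.
# Sample timestamps are illustrative, not real observed data.

from datetime import datetime, timezone, timedelta

MSK = timezone(timedelta(hours=3))  # Moscow observes no DST

def office_hours_share(posts_utc):
    """Fraction of posts made Mon-Fri between 09:00 and 18:00 Moscow time."""
    in_window = sum(
        1 for t in posts_utc
        if (local := t.astimezone(MSK)).weekday() < 5 and 9 <= local.hour < 18
    )
    return in_window / len(posts_utc)

# Hypothetical feed: posts at 06:00-14:00 UTC, Mon-Fri (09:00-17:00 MSK).
bot_like = [datetime(2025, 10, 6 + d, 6 + h, tzinfo=timezone.utc)
            for d in range(5) for h in range(9)]
print(office_hours_share(bot_like))  # -> 1.0, every post inside the window
```

A genuinely organic account scatters posts across evenings and weekends, so its share stays well below 1.0; combined with wallet and metadata overlaps, the timing signal becomes strong corroborating evidence.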
The Kremlin's main goal is not so much to make people believe lies as to make them stop believing anything at all. This has produced the phenomenon known as the "liar's dividend": the ability of attackers to instantly declare any inconvenient but authentic audio or video recording a forgery. Verification has become almost impossible without special tools, which is why critical thinking and multi-level fact-checking (cross-checking through several independent sources) remain the only way to establish the reliability of information. It is important to understand that fakes always appeal to emotions (anger, fear, panic), while the truth requires rational analysis and time to establish.
The EU responded to these challenges with the "Artificial Intelligence Act" (EU AI Act), adopted in 2024 and amended this year to require mandatory labeling of any synthetic content. The French VIGINUM and the German BaFin introduced a "digital quarantine" system: automatic blocking of domains that show signs of manipulative interference. In 2025, the UK adopted an updated National Security Act under which creating and distributing deepfakes in the interests of a foreign intelligence service is treated as treason. The C2PA (Coalition for Content Provenance and Authenticity) standard is also being rolled out, letting every user check the origin and editing history of a photo or video. Many more legislative and technological initiatives against fakes and disinformation lie ahead; the main challenge is knowing the truth in the post-truth era.
