
The Dark Side of AI: How ChatGPT Fell into the Hands of Hackers, Spies, and Authoritarian Regimes


When a prompt on ChatGPT becomes a propaganda tool and the starting point for a cyber attack, we are no longer in the realm of hypotheses: it is already happening on a global scale. OpenAI, in the report “Disrupting Malicious Uses of AI: June 2025,” explains how its artificial intelligence models have been exploited by malicious actors around the world to organize scams, disinformation campaigns, espionage operations, and cyber attacks.

The document reconstructs in detail a series of activities identified and disrupted in recent months, ranging from digital crimes to advanced social engineering to full-blown covert influence operations: fake resumes generated in North Korea to obtain remote jobs, pro-Kremlin campaigns relaunched on Telegram, Filipino bots praising the Marcos government, and malware written line by line with generative AI.

It marks a qualitative leap in the abuse of artificial intelligence, which today allows cybercriminals and authoritarian regimes to replicate and amplify their operations with the same ease with which an ordinary user writes an email or asks for a summary.

But while AI is increasing the effectiveness and scale of attacks, it is also providing new tools to counter them. Every prompt sent, every anomalous use of models leaves behind a trail of digital signals: usage patterns, log traces, behavioral anomalies. These clues become valuable sources of analysis for security teams, who can then identify emerging threats, block suspicious accounts, and effectively strengthen defenses.
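To make the idea concrete, here is a minimal, purely illustrative sketch (not OpenAI's actual detection pipeline, whose internals the report does not disclose) of one signal the article mentions: automated abuse tends to leave usage logs with high request volume and near-identical repeated prompts, while ordinary users vary what they ask. The log format, function name, and thresholds below are all hypothetical.

```python
# Hypothetical log format: a list of (account_id, prompt_text) pairs.
def flag_suspicious_accounts(logs, min_requests=50, max_unique_ratio=0.2):
    """Flag accounts combining high volume with low prompt diversity.

    Illustrative thresholds only: an account is flagged when it has sent
    at least `min_requests` prompts and the share of *distinct* prompts
    is at or below `max_unique_ratio` (i.e. mostly repeated templates).
    """
    prompts_by_account = {}
    for account, prompt in logs:
        prompts_by_account.setdefault(account, []).append(prompt)

    flagged = []
    for account, prompts in prompts_by_account.items():
        if len(prompts) < min_requests:
            continue  # too little traffic to judge
        unique_ratio = len(set(prompts)) / len(prompts)
        if unique_ratio <= max_unique_ratio:  # mostly repeated templates
            flagged.append(account)
    return flagged

# Example: a bot repeats one canned slogan; a normal user varies requests.
logs = [("bot-1", "Post this slogan about topic X")] * 100
logs += [("user-7", f"Summarize article {i}") for i in range(60)]
print(flag_suspicious_accounts(logs))  # -> ['bot-1']
```

Real systems combine many such weak signals (timing, account metadata, content patterns) rather than relying on any single heuristic, but the principle is the same: abuse at scale is hard to make look organic.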

Disinformation, Hacking and Scams: The Diary of AI Attacks

From political phishing to pyramid schemes, OpenAI’s report highlights the surprising versatility with which AI is being exploited by malicious actors in every corner of the planet.

In North Korea, suspected regime-linked operatives reportedly used ChatGPT to generate fake identities, plausible resumes, and credible LinkedIn profiles to secure remote jobs at foreign, particularly U.S., companies. In some cases, the contracts also included the delivery of company devices, such as laptops, that would then be remotely controlled, potentially allowing access to sensitive digital infrastructure.

Beijing, under Operation “Sneer Review,” reportedly fueled pro-government social media campaigns on the Taiwan issue, seeding canned posts and comments on TikTok, Reddit, and X to steer the global conversation.

In Manila, the “High Five” campaign reportedly turned AI into an electoral echo chamber: likes, emojis, and pro-Marcos slogans allegedly came from fake accounts operated by a marketing agency. In parallel, the hacker groups APT5 and APT15, both linked to China, reportedly used language models to brute-force credentials and map strategic infrastructure in the United States, including military networks.

There is no shortage of divide-and-rule attempts: with “Uncle Spam,” fake American veterans allegedly spread conflicting messages on X and Bluesky to exacerbate internal fractures. Finally, in Cambodia, the “Wrong Number” scam promised easy earnings via chat: automated messages lured victims, asked for an “advance,” and then pushed them to recruit new participants, fueling a lucrative pyramid scheme.

Fake Journalists With ChatGPT: Academics And Policy Makers Spied On

Among the most sophisticated cases described in the report is the “VAGue Focus” operation, attributed to actors with alleged ties to China. Posing as freelance journalists or analysts from non-existent research centers, the perpetrators reportedly contacted Western experts, academics, and public officials. The goal was to collect confidential information on topics sensitive for Beijing, such as US policies toward Taiwan or the internal dynamics of European institutions. ChatGPT was allegedly used to write realistic messages, simulate journalistic language, generate names and cover biographies, and automatically translate texts. In some cases, small sums were offered in exchange for interviews or written documents. In others, the requests were more invasive, such as access to sensitive materials that could be reused for strategic analysis or counter-information operations.

AI and disinformation: German elections in the crosshairs

The report also describes a campaign suspected of seeking to influence the 2025 German federal elections. The operation, which OpenAI cautiously traces back to pro-Russian networks, allegedly spread slogans, memes, and pseudo-journalistic articles via Telegram channels and the Pravda DE website to support the AfD, criticize NATO, and delegitimize Berlin. Generative AI was reportedly crucial in calibrating the messages in natural, culturally coherent German.

From Development to Debugging: AI at the Service of Malware

Another notable case is the “ScopeCreep” operation, which the report claims was carried out by an actor with possible ties to Russia. Hackers allegedly used ChatGPT to develop multi-stage malware disguised as a legitimate gaming tool. The code was written in Go, a programming language created by Google and known for its speed and efficiency, and included PowerShell scripts, sequences of commands that automate tasks on Windows computers. Artificial intelligence was allegedly used not only to write the code, but also to refine it, fix bugs, and find ways to bypass security controls like Windows Defender. It is an example of how generative models can become genuine assistants in the creation of malicious software.

La Repubblica
