OpenAI Shuts Down 10 Malicious AI Ops Linked to China, Russia, Iran, N. Korea

OpenAI, a leading artificial intelligence company, has revealed it is actively fighting widespread misuse of its AI tools by malicious groups from countries like China, Russia, North Korea, and Iran.

In a report released earlier this week, OpenAI said it had shut down ten covert influence and cyber operations over the past three months.

The report highlights how bad actors are using AI to create convincing online deceptions. For instance, groups connected to China used AI to post mass fake comments and articles on platforms like TikTok and X, pretending to be real users in a campaign called Sneer Review.

This included a false video accusing Pakistani activist Mahrang Baloch of appearing in a pornographic film. They also used AI to generate content for polarizing discussions within the US, including creating fake profiles of US veterans to influence debates on topics like tariffs, in an operation named Uncle Spam.

North Korean actors, on the other hand, crafted fake resumes and job applications using AI. They sought remote IT jobs globally, likely to steal data. Meanwhile, Russian groups employed AI to develop dangerous software and plan cyberattacks, with one operation, ScopeCreep, focusing on creating malware designed to steal information and hide from detection.

An Iranian group, STORM-2035 (aka APT42, Imperial Kitten and TA456), repeatedly used AI to generate tweets in Spanish and English about US immigration, Scottish independence, and other sensitive political issues. They created fake social media accounts, often with obscured profile pictures, to appear as local residents.

AI is also being used in widespread scams. In one notable case, an operation likely based in Cambodia, dubbed Wrong Number, used AI to translate messages for a task scam. This scheme promised high pay for simple online activities, like liking social media posts.

The scam followed a clear pattern: a “ping” (cold contact) offering high wages, a “zing” (building trust and excitement with fake earnings), and finally, a “sting” (demanding money from victims for supposed larger rewards). These scams operated across multiple languages, including English, Spanish, and German, directing victims to apps like WhatsApp and Telegram.

OpenAI actively detects and bans accounts involved in these activities, using AI as a ‘force multiplier’ for its investigative teams, the company claims. Thanks to this proactive approach, most of these malicious campaigns achieved little authentic engagement and limited real-world impact before being shut down.

Beyond its fight against AI misuse, OpenAI is also facing a significant legal challenge over user privacy. On May 13, US Judge Ona T. Wang ordered OpenAI to preserve ChatGPT conversations.

This order stems from a copyright infringement lawsuit filed by The New York Times and other publishers, who allege OpenAI unlawfully used millions of their copyrighted articles to train its AI models. They argue that ChatGPT’s ability to reproduce, summarize, or mimic their content without permission or compensation threatens their business model.

OpenAI has objected to the order, arguing that it forces the company to act against its commitment to user privacy and control. The company noted that users often share sensitive personal information in chats, expecting it to be deleted or to remain private, making the legal demand a complex challenge for OpenAI.

HackRead
