
North Korea’s state-sponsored Kimsuky hacking group has used ChatGPT to generate deepfake South Korean military ID cards as part of a targeted phishing campaign against defense organizations. First detected in July 2025 by Genians Security Center, the operation marks a worrying evolution in how nation-state actors weaponize AI for cyber espionage.
By framing requests as “mock-ups” or “sample designs for legitimate purposes,” attackers bypassed ChatGPT’s safeguards against creating government documents. Metadata embedded in the ID images referenced “GPT4oOpenAI,” and deepfake detection tools logged a 98% probability that the cards were AI-generated, highlighting sophisticated abuse of GPT-4o.
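For defenders, an embedded generator string like the one above is a cheap first-pass signal whenever it survives in a file. The snippet below is a minimal illustrative sketch, assuming Python with the Pillow library; the marker list, field handling, and filename are hypothetical and not taken from the Genians analysis.

```python
# Sketch: flag images whose embedded metadata mentions a known AI generator string.
# Assumes Pillow is installed; marker strings and the file path are illustrative.
from PIL import Image
from PIL.ExifTags import TAGS

AI_MARKERS = ("GPT4oOpenAI", "OpenAI")  # illustrative marker strings

def metadata_strings(path: str) -> list[str]:
    """Collect text-valued metadata from format info fields and EXIF tags."""
    img = Image.open(path)
    values = [str(v) for v in img.info.values()]  # e.g. PNG text chunks
    exif = img.getexif()
    values += [f"{TAGS.get(tag, tag)}={value}" for tag, value in exif.items()]
    return values

def looks_ai_generated(path: str) -> bool:
    """Heuristic only: the absence of a marker proves nothing."""
    return any(marker.lower() in field.lower()
               for field in metadata_strings(path)
               for marker in AI_MARKERS)

if __name__ == "__main__":
    print(looks_ai_generated("suspect_id_card.png"))  # hypothetical file
```

Because metadata can be stripped in seconds, a check like this only complements, and never replaces, dedicated deepfake detection of the kind Genians used.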
Sophisticated AI-Powered Deception Campaign
Malicious emails masqueraded as official South Korean defense communications, using domains like “.mli.kr” to imitate legitimate “.mil.kr” addresses. Recipients found compressed attachments seemingly containing draft military IDs, but opening them unleashed malware capable of data extraction and remote system control.
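Lookalike domains of this kind can often be screened with nothing more than string similarity against a trusted allowlist. The sketch below, using only Python’s standard library, is an illustrative assumption about how a mail filter might flag them; the allowlist and threshold are hypothetical rather than drawn from the campaign analysis.

```python
# Sketch: flag sender domains that closely resemble, but do not match, trusted ones.
# Standard library only; the allowlist and similarity threshold are illustrative.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"mil.kr"}  # illustrative allowlist of legitimate suffixes

def is_lookalike(sender_domain: str, threshold: float = 0.8) -> bool:
    """True if the domain resembles a trusted domain without actually matching it."""
    sender_domain = sender_domain.lower().strip(".")
    for trusted in TRUSTED_DOMAINS:
        if sender_domain == trusted or sender_domain.endswith("." + trusted):
            return False  # exact match or legitimate subdomain
        if SequenceMatcher(None, sender_domain, trusted).ratio() >= threshold:
            return True   # close but not equal: likely spoof, e.g. "mli.kr"
    return False

print(is_lookalike("mli.kr"))       # True  -> flagged as a spoof of mil.kr
print(is_lookalike("army.mil.kr"))  # False -> legitimate subdomain
```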
Analysis by security researchers revealed that the forged IDs were only the first step in a larger social engineering scheme. Once trust was established, the hackers attempted to harvest credentials and deploy custom backdoors tailored to defense-grade infrastructure, according to reporting from Yahoo Finance.
The campaign underscores the challenge of distinguishing AI-crafted forgeries from authentic documents, even with advanced detection software, and demonstrates how AI can amplify traditional phishing with near-perfect impersonation.
Expanding Pattern of North Korean AI Exploitation
This operation is part of a broader trend in which North Korean cyber units leverage AI across their workflows. An August 2025 report from Anthropic found that Kimsuky and affiliated groups routinely use Claude to craft convincing résumés, pass coding tests, and maintain cover at Fortune 500 firms, with the report attributing roughly 80% of that Claude usage to maintaining jobs the operatives had already secured.
Mun Chong-hyun, director at Genians, observes that adversaries now integrate AI into every phase of an attack, from scenario planning and malware creation to impersonating recruiters. The US Department of Homeland Security has assessed that Kimsuky “is most likely tasked by the North Korean regime with a global intelligence-gathering mission,” a mandate the group increasingly pursues with off-the-shelf AI tools.
Targeted Victims and Defense Implications
The phishing efforts primarily targeted South Korean journalists, researchers, and human-rights activists, aiming to infiltrate networks concerned with defense and North Korea. Although the precise number of victims remains unknown, the campaign’s use of high-fidelity deepfakes suggests a high-stakes operation.
Genians warns that as AI services become ubiquitous in business, they also pose new risks for national security. Organizations are urged to deploy endpoint detection and response (EDR) solutions and maintain continuous monitoring to counter increasingly sophisticated AI-enhanced threats.