N. Korean hacking group uses AI deepfake to target S. Korean institutions

Seoul, Sep 15 (IANS) A North Korea-linked hacking group has carried out a cyberattack on South Korean organisations, including a defence-related institution, using artificial intelligence (AI)-generated deepfake images, a report showed on Monday.

The Kimsuky group, a hacking unit believed to be sponsored by the North Korean government, attempted a spear-phishing attack on a military-related organisation in July, according to the report by the Genians Security Center (GSC), a South Korean security institute, reports Yonhap news agency.

Spear-phishing is a targeted cyberattack, often conducted through personalised emails that impersonate trusted sources.

The report said the attackers sent an email carrying malicious code, disguised as correspondence about ID issuance for military-affiliated officials. The ID card image used in the attempt was presumed to have been produced by a generative AI model, marking a case of the Kimsuky group applying deepfake technology.

AI platforms such as ChatGPT typically reject requests to generate copies of military IDs, on the grounds that government-issued identification documents are legally protected.

However, the GSC report noted that the hackers appear to have bypassed restrictions by requesting mock-ups or sample designs for “legitimate” purposes, rather than direct reproductions of actual IDs.

The findings follow a separate report published in August by U.S.-based Anthropic, developer of the AI service Claude, which detailed how North Korean IT workers have misused AI.

That report said the workers generated manipulated virtual identities to undergo technical assessments during job applications, part of a broader scheme to circumvent international sanctions and secure foreign currency for the regime.

GSC said such cases highlight North Korea’s growing attempts to exploit AI services for increasingly sophisticated malicious activities.

“While AI services are powerful tools for enhancing productivity, they also represent potential risks when misused as cyber threats at the level of national security,” it said.

“Therefore, organisations must proactively prepare for the possibility of AI misuse and maintain continuous security monitoring across recruitment, operations and business processes.”

—IANS

