A new report from AI startup Anthropic warns that cybercriminals are weaponizing AI assistants in increasingly sophisticated ways. In one case, attackers used Anthropic’s own coding tool, Claude Code, to carry out nearly every stage of a large-scale data extortion campaign targeting at least 17 organizations across multiple industries.

By feeding the AI a detailed instructions file disguised as a legitimate security contract, the hackers leveraged Claude to plan attacks, extract data, escalate privileges, and even generate customized ransom notes. Unlike traditional ransomware, the operation relied on stolen data for extortion rather than encrypting systems.

Anthropic also highlighted other misuses of its AI, including a scheme in which North Korean IT workers fraudulently secured jobs at international companies by using Claude to generate convincing resumes, interview responses, and technical work. In parallel, researchers at ESET uncovered a proof-of-concept ransomware dubbed PromptLock, which uses a large language model to generate malicious scripts for Windows, Linux, and macOS. Though still in its early stages, the malware demonstrates how generative AI can be integrated directly into attack frameworks, enabling automated reconnaissance, malware development, and tailored extortion strategies.

These findings underscore how generative AI is lowering the technical barriers to cybercrime, giving even low-skilled actors access to advanced attack capabilities. As Anthropic notes, the assumption that complex attacks require equally sophisticated attackers no longer holds when AI can deliver instant expertise. Security researchers warn the trend could fuel an escalating arms race between AI-powered attackers and defenders, with detection and prevention becoming increasingly difficult.
