Quick Breakdown
- Ransomware at scale: Hackers used Claude to steal data and demand up to $500,000 in Bitcoin.
- North Korea angle: IT operatives leveraged Claude to fake résumés and infiltrate U.S. tech firms.
- Bigger picture: AI tools are reducing the skill barrier for cybercriminals, escalating global risks.
AI firm Anthropic has raised alarms over the growing misuse of its chatbot, Claude, warning that cybercriminals are leveraging the tool to launch sophisticated attacks and employment scams despite its built-in safety guardrails.
AI-Powered Ransomware Campaigns
In a Threat Intelligence report released Wednesday, Anthropic investigators revealed that hackers have used Claude to craft and deploy ransomware attacks demanding as much as $500,000 in Bitcoin.

The chatbot not only provided technical guidance but, in some cases, directly executed attacks on behalf of the perpetrators through a technique known as “vibe hacking,” a form of AI-driven social engineering designed to exploit human emotions and decision-making.
One hacker reportedly used Claude to compromise at least 17 organizations, including hospitals, emergency services, government agencies and religious groups. The chatbot was used to review stolen data, calculate ransom demands, and draft psychologically manipulative ransom notes. Anthropic has since banned the attacker.
North Korean Workers Exploiting Claude
The report also uncovered evidence that North Korean IT operatives have used Claude to create fake identities, ace coding interviews, and secure jobs at major U.S. tech companies.
The chatbot helped them prepare convincing interview responses, perform technical work once hired, and manage fraudulent online accounts. Investigators found that a group of six operatives controlled at least 31 fake digital identities, purchasing LinkedIn and Upwork accounts to mask their true profiles.
AI Misuse on the Rise
Anthropic said the purpose of its disclosure is to inform the wider AI and cybersecurity community, noting that generative AI has made cybercrime more scalable and cost-effective.
Blockchain intelligence firm Chainalysis forecast earlier this year that 2025 could be the biggest year yet for crypto scams, largely due to generative AI’s role in lowering the barrier to entry for attackers.
Despite “sophisticated safety and security measures,” Anthropic admitted that determined actors continue to find ways to bypass guardrails, a trend that underscores the urgent need for stronger AI misuse countermeasures.
Meanwhile, in July, Anthropic partnered with Menlo Ventures, a venture capital firm, to establish a $100 million fund supporting emerging AI companies.