SlowMist Issues Warning: AI Coding Tools Pose Silent Threat to Cryptocurrency Security

TLDR

  • SlowMist has identified a significant vulnerability in AI coding tools that poses a threat to the security of cryptocurrency developer systems.
  • This vulnerability allows malware to execute automatically when developers open untrusted project folders.
  • Demonstrations have shown that AI coding tools, including Cursor, are particularly susceptible to this flaw.
  • Attackers are embedding malicious prompts within files such as README.md and LICENSE.txt, which AI tools interpret as executable instructions.
  • Previously, North Korean threat groups have utilized smart contracts to deploy malware discreetly, leaving no trace on blockchain networks.

A recent alert from SlowMist highlights a critical vulnerability in AI coding tools, placing developer systems at immediate risk. Attackers can now exploit trusted development environments without detection, jeopardizing crypto projects, digital assets, and developer credentials.

AI Tools Executing Malicious Code Through Routine Operations

SlowMist has cautioned that AI coding assistants can be compromised through hidden instructions embedded in common project files like README.md and LICENSE.txt.

This vulnerability is triggered when users open a project folder, enabling malware to execute commands on macOS or Windows systems without requiring any user confirmation.

The absence of a confirmation prompt makes this attack particularly dangerous for cryptocurrency development environments that may contain sensitive data or digital wallets.

The attack method, known as the “CopyPasta License Attack,” was initially revealed by HiddenLayer in September following extensive research into embedded markdown payloads.

Attackers exploit how AI tools interpret markdown files by concealing malicious prompts within comments, which AI systems then process as code instructions.

According to HiddenLayer’s technical report, Cursor, a widely used AI-assisted coding platform, has been confirmed as vulnerable, along with Windsurf, Kiro, and Aider.

The malware is executed when AI agents process these instructions and copy them into the codebase, leading to silent compromise of entire projects.

HiddenLayer stated, “Developers are exposed even before writing any code,” adding that “AI tools become unintentional delivery vectors.”

Cursor users face the highest risk, as demonstrated in controlled tests that showed complete system compromise after merely accessing a project folder.

State-Backed Attacks on Crypto Projects Intensify

North Korean threat actors are increasingly targeting blockchain developers with novel methods to embed backdoors within smart contracts.

Google’s Mandiant team has reported that the UNC5342 group has deployed malware, including JADESNOW and INVISIBLEFERRET, across Ethereum and other networks.

This technique involves storing payloads in read-only functions to evade transaction logs and bypass standard blockchain monitoring.

Developers inadvertently execute malware by interacting with these smart contracts through decentralized platforms or tools.

Modular malware strains named BeaverTail and OtterCookie were employed in phishing campaigns that posed as job interviews for crypto engineers.

These attacks utilized fictitious companies, such as Blocknovas and Softglide, to distribute malicious code via NPM packages.

Silent Push researchers traced both entities to vacant properties, uncovering that they were fronts for the “Contagious Interview” malware operation.

Once a system is infected, compromised machines transmit credentials and codebase information to attacker-controlled servers using encrypted communication channels.

AI-Powered Exploits and Scams Escalate Rapidly

Recent testing by Anthropic revealed that AI tools successfully exploited half of the smart contracts in its SCONE-bench benchmark, simulating potential damages of $550.1 million.

Models like GPT-4 and GPT-5 identified working exploits in 19 smart contracts deployed after their respective training data cutoffs.

Two zero-day vulnerabilities were discovered in active Binance Smart Chain contracts valued at $3,694, with the model API cost for their identification being $3,476.

The study indicated that the speed of exploit discovery doubled monthly, while the cost per working exploit significantly decreased.

Chainabuse has reported a 456% year-over-year increase in AI-driven crypto scams by April 2025, largely propelled by deepfake videos and voice cloning technology.

Scam wallets received 60% of all deposits originating from AI-generated campaigns, which featured convincing fake identities and real-time automated responses.

Attackers are now deploying bots to simulate technical interviews, luring developers into downloading disguised malware tools.

Despite these escalating risks, cryptocurrency-related hacks saw a 60% decrease in December, totaling $76 million, down from $194.2 million in November, according to PeckShield data.