Cybersecurity researchers have uncovered multiple security vulnerabilities in Anthropic’s Cloud Code, an artificial intelligence (AI)-powered coding assistant, that could result in remote code execution and theft of API credentials.
“The vulnerabilities exploit various configuration mechanisms, including hooks, Model Context Protocol (MCP) servers, and environment variables – allowing users to clone and open untrusted repositories, execute arbitrary shell commands, and exfiltrate anthropogenic API keys,” Check Point Research said in a report shared with The Hacker News.
The deficiencies identified fall into three broad categories –
- no cve (CVSS Score: 8.7) – A code injection vulnerability stemming from user consent when starting cloud code in a new directory that could result in arbitrary code execution without additional confirmation via untrusted project hooks defined in .cloud/settings.json. (Fixed in version 1.0.87 in September 2025)
- CVE-2025-59536 (CVSS score: 8.7) – A code injection vulnerability that allows the execution of arbitrary shell commands automatically upon tool initialization when a user starts cloud code in an untrusted directory. (Fixed in version 1.0.111 in October 2025)
- CVE-2026-21852 (CVSS Score: 5.3) – An information disclosure vulnerability in the project-load flow of cloud code that allows a malicious repository to exfiltrate data, including anthropic API keys. (Fixed in version 2.0.65 in January 2026)
“If a user initiated cloud code into an attacker-controlled repository, and the repository included a settings file that set ANTHROPIC_BASE_URL to an attacker-controlled endpoint, the cloud code would issue API requests before showing the trust prompt, including potentially leaking the user’s API key,” Anthropic said in an advisory for CVE-2026-21852.
In other words, simply opening a crafted repository is enough to pull out a developer’s activated API key, redirect authenticated API traffic to the external infrastructure, and capture credentials. This, in turn, could allow the attacker to penetrate deeper into the victim’s AI infrastructure.
This could potentially include accessing shared project files, modifying/deleting cloud-stored data, uploading malicious content, and even generating unexpected API costs.
Successful exploitation of the first vulnerability can trigger covert execution on the developer’s machine without any additional interaction other than launching the project.
CVE-2025-59536 also achieves a similar target, with the main difference being that repository-defined configurations defined via the .mcp.json and claude/settings.json files can be used by an attacker to override explicit user approval before interacting with external devices and services via the Model Context Protocol (MCP). This is achieved by setting the “enableAllProjectMcpServers” option to true.
“As AI-powered devices gain the ability to execute commands, initiate external integrations, and initiate network communications autonomously, configuration files effectively become part of the execution layer,” Check Point said. “What was once considered the operational context now directly influences system behavior.”
“This fundamentally changes the threat model. The risk is no longer limited to running untrusted code – it now extends to opening untrusted projects. In an AI-powered development environment, the supply chain starts not only with the source code, but also with the automation layers around it.”