Cybersecurity researchers have revealed details of a patched security flaw affecting Ask Gordon, an artificial intelligence (AI) assistant built into Docker Desktop and the Docker command-line interface (CLI), which can be used to execute code and exfiltrate sensitive data.
Critical vulnerability has been codenamed dockerdash By cyber security company Noma Labs. This was addressed by Docker with the release of version 4.50.0 in November 2025.
“In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-step attack: Gordon AI reads and interprets the malicious directive, forwarding it to the MCP [Model Context Protocol] gateway, which then executes it through the MCP tool,” Sasi Levy, head of security research at Noma, said in a report shared with The Hacker News.
“Each step occurs with zero verification, leveraging existing agents and the MCP gateway architecture.”
Successful exploitation of the vulnerability could result in significant-impact remote code execution for cloud and CLI systems, or high-impact data exfiltration for desktop applications.
The problem, Noma Security said, stems from the fact that the AI assistant treats unverified metadata as executable commands, which allows it to propagate through different layers without any verification, allowing an attacker to overcome security boundaries. The result is that a simple AI query opens the door to tool execution.
With the MCP acting as a connective tissue between a larger language model (LLM) and the local environment, the issue is a failure of contextual trust. The problem has been described as a case of meta-context injection.
“The MCP gateway cannot distinguish between informational metadata (like standard Docker labels) and pre-authorized, runnable internal instructions,” Levy said. “By embedding malicious instructions in these metadata fields, an attacker can hijack the AI’s reasoning process.”
In a hypothetical attack scenario, a threat actor could exploit a critical trust boundary violation in the way Ask Gordon parses container metadata. To accomplish this, the attacker crafts a malicious Docker image with instructions embedded in the Dockerfile label field.
Although metadata fields may seem innocuous, when processed by Ask Gordon AI they become vectors for injection. The code execution attack chain is as follows −
- The attacker publishes a Docker image containing weaponized LABEL directives in the Dockerfile
- When a victim asks Ask Gordon AI about an image, Gordon reads the image metadata, including all LABEL fields, taking advantage of Ask Gordon’s inability to distinguish between legitimate metadata descriptions and embedded malicious instructions.
- Ask Gordon to forward the parsed instructions to the MCP Gateway, a middleware layer that sits between the AI agents and the MCP servers.
- The MCP gateway interprets this as a standard request from a trusted source and invokes the specified MCP tool without any additional verification.
- The MCP tool executes commands with the victim’s Docker privileges, allowing code execution.
The data exfiltration vulnerability weaponizes the same instant injection flaw, but targets the Docker desktop implementation of Ask Gordon to capture sensitive internal data about the victim’s environment using the MCP tool by leveraging the read-only permissions of the assistant.
The information collected may include details about installed tools, container details, Docker configuration, mounted directories, and network topology.
It’s worth noting that Ask Gordon version 4.50.0 also resolves an instant injection vulnerability discovered by Pillar Security that could allow attackers to hijack the assistant and exfiltrate sensitive data by tampering with the Docker Hub repository metadata with malicious instructions.
“The DockerDash vulnerability underscores your need to consider AI supply chain risk as the core threat existing,” Levy said. “This proves that your trusted input sources can be used to hide malicious payloads that easily manipulate the AI’s execution path. Mitigating this new class of attacks requires applying zero-trust validation to all relevant data provided to AI models.”