New research has found that Google Cloud API keys, commonly treated as mere project identifiers for billing purposes, can be misused to authenticate to sensitive Gemini endpoints and access private data.
The findings come from Truffle Security, which discovered approximately 3,000 Google API keys (identified by the prefix “AIza”) embedded in client-side code to provide Google-related services like embedded maps on websites.
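Google API keys follow a recognizable shape, which is what makes scraping them from client-side code so easy. A minimal detection sketch, assuming the commonly cited format of the "AIza" prefix followed by 35 URL-safe characters (treat the exact length as a heuristic, not a guarantee):

```python
import re

# Heuristic: Google API keys start with "AIza" followed by 35 URL-safe
# characters (39 characters total). This is the widely used detection
# pattern, not an official specification.
GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_google_keys(source: str) -> list[str]:
    """Return unique candidate Google API keys found in a blob of text,
    e.g. a page's inline JavaScript."""
    return sorted(set(GOOGLE_KEY_RE.findall(source)))

# Example: a fake, correctly shaped key embedded in page source.
page = '<script>var mapsKey = "AIza' + "A" * 35 + '";</script>'
print(find_google_keys(page))
```

Secret scanners apply the same kind of pattern at scale across crawled pages and public repositories; the hypothetical `find_google_keys` helper above only illustrates the matching step.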
“With a valid key, an attacker can access uploaded files, cached data, and charge LLM usage to your account,” said security researcher Joe Lyons. “Now the key also authenticates Gemini, even though these keys were never intended for that.”
The issue occurs when users enable the Gemini API (i.e., the Generative Language API) on a Google Cloud project, causing existing API keys in that project, including those exposed in website JavaScript code, to silently gain access to Gemini endpoints without any warning or notice.
This effectively allows attackers scraping websites to harvest such API keys and abuse them for quota theft and other nefarious purposes, including accessing sensitive files via the /files and /cachedContents endpoints and making Gemini API calls that rack up huge bills for victims.
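Whether a given key has quietly gained Gemini access can be checked by probing the public Generative Language endpoint. A hedged sketch using only the standard library; the base URL and `key` query parameter follow the Gemini API's documented REST conventions, but the exact response semantics are an assumption, and this should only ever be run against keys you own:

```python
import urllib.error
import urllib.parse
import urllib.request

GEMINI_BASE = "https://generativelanguage.googleapis.com/v1beta"

def gemini_probe_url(api_key: str) -> str:
    """Build the models-list URL used as a lightweight auth probe."""
    return f"{GEMINI_BASE}/models?key={urllib.parse.quote(api_key)}"

def key_reaches_gemini(api_key: str) -> bool:
    """Return True if the key authenticates the Generative Language API.

    An HTTP 200 on the models list suggests the key is live for Gemini;
    a 400/403 suggests it is invalid or restricted away from this API.
    Only probe keys that belong to you.
    """
    try:
        with urllib.request.urlopen(gemini_probe_url(api_key), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

A defender auditing their own project could run `key_reaches_gemini` over every key found by a secret scan to see which ones retroactively became Gemini credentials.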
Additionally, Truffle Security found that creating a new API key in Google Cloud is “unrestricted” by default, meaning it is applicable to every enabled API in the project, including Gemini.
“The result: thousands of API keys that were deployed as benign billing tokens are now live Gemini credentials on the public internet,” Lyons said. In total, the company said it found 2,863 live keys on the public internet, including one from a website linked to Google.
The revelation came after Quokka published a similar report, which found more than 35,000 unique Google API keys embedded in a scan of 250,000 Android apps.
“Beyond potential cost abuse through automated LLM requests, organizations should also consider how AI-enabled endpoints might interact with signals, generated content, or connected cloud services that expand the blast radius of a compromised key,” the mobile security company said.
“Even if no direct customer data is accessible, the combination of elevated access, quota consumption, and potential integration with broader Google Cloud resources creates a risk profile that is materially different from the original billing-identifier model developers expected.”
Although this behavior was initially thought to be intended, Google has taken steps to address the issue.
“We are aware of this report and have worked with researchers to resolve the issue,” a Google spokesperson told The Hacker News via email. “The security of our users’ data and infrastructure is our top priority. We have already implemented proactive measures to detect and block leaked API keys attempting to access the Gemini API.”
It is currently not known if this issue was ever exploited in the wild. However, in a Reddit post published two days ago, a user claimed that a “stolen” Google Cloud API key resulted in charges of $82,314.44 between February 11 and 12, 2026, far above their typical spend of about $180 per month.
We’ve contacted Google for further comment, and will update the story if we hear back.
Users with Google Cloud projects are advised to review their enabled APIs and services and verify whether artificial intelligence (AI)-related APIs are enabled. If they are, and the project's API keys are publicly exposed (either in client-side JavaScript or checked into a public repository), the keys should be rotated.
“Start with your oldest keys first,” says Truffle Security. “They were most likely deployed publicly under the old guidance that it was safe to share API keys, and retroactively gained Gemini privileges when someone on your team enabled the API.”
“This is a great example of how dynamic the risk is, and how an API can be over-permissioned after the fact,” Tim Erlin, security strategist at Wallarm, said in a statement. “Security testing, vulnerability scanning and other assessments must be ongoing.”
“APIs are particularly tricky because changes to their operation or the data they can access are not necessarily vulnerabilities, but they can directly increase risk. Adopting and using AI that runs on these APIs accelerates the problem. Finding vulnerabilities in APIs alone is not enough. Organizations must profile behavior and data access, identify anomalies, and proactively prevent malicious activity.”