Google said Thursday it has observed a North Korea-linked threat actor tracked as UNC2970 using its generative artificial intelligence (AI) model Gemini to conduct reconnaissance on targets, as hacking groups continue to weaponize the tool to accelerate various stages of the cyberattack life cycle, enable information operations, and even conduct model extraction attacks.
“The group used Gemini to synthesize OSINT and profile high-value targets to support campaign planning and reconnaissance,” the Google Threat Intelligence Group (GTIG) said in a report shared with The Hacker News. “This actor’s targeted profile included searching for information about major cybersecurity and defense companies and mapping specific technical job roles and salary information.”
The tech giant’s threat intelligence team characterized this activity as a blurring of the boundaries between routine professional research and malicious reconnaissance, allowing state-backed actors to create tailored phishing personas and identify soft targets for initial compromise.
UNC2970 is the alias assigned to a North Korean hacking group that overlaps with a cluster tracked as the Lazarus Group, Diamond Sleet, and Hidden Cobra. It is known to operate a long-running campaign called Operation Dream Job that targets the aerospace, defense, and energy sectors with malware, approaching victims under the pretext of job openings.
GTIG said UNC2970 has “relentlessly” focused on defense targeting and on impersonating corporate recruiters in its campaigns, with the group profiling targets down to specific technical job roles and salary information.
UNC2970 is not the only threat actor to have abused Gemini to enhance its capabilities and move from initial reconnaissance to active targeting at a faster clip. Other hacking teams that have integrated the tool into their workflows are as follows –
- UNC6418 (unattributed), to collect targeting intelligence, particularly by searching for sensitive account credentials and email addresses.
- TEMP.Hex or Mustang Panda (China), to compile dossiers on specific individuals, including targets in Pakistan, and to gather operational and structural data on separatist organizations in various countries.
- APT31 or Judgment Panda (China), posing as a security researcher to automate vulnerability analysis and create targeted testing plans.
- APT41 (China), to extract explanations of open-source tools from README.md pages, as well as to troubleshoot and debug exploit code.
- UNC795 (China), to troubleshoot its code, conduct research, and develop web shells and scanners for PHP web servers.
- APT42 (Iran), to facilitate reconnaissance and targeted social engineering by creating personas that drive engagement with targets, as well as to develop a Python-based Google Maps scraper, build a SIM card management system in Rust, and research the use of a proof-of-concept (PoC) exploit for the WinRAR flaw (CVE-2025-8088).
Google also said it detected malware called HONESTCUE that leverages Gemini’s API to outsource the generation of its next-stage functionality, as well as an AI-generated phishing kit codenamed COINBAIT that is built using Lovable AI and masquerades as a cryptocurrency exchange for credential harvesting. Some aspects of COINBAIT-related activity have been attributed to a financially motivated threat group called UNC5356.
“HONESTCUE is a downloader and launcher framework that sends prompts through Google Gemini’s API and receives C# source code in response,” GTIG said. “However, instead of leveraging the LLM to update itself, HONESTCUE calls the Gemini API to generate code that drives ‘stage two’ functionality, which downloads and executes a second piece of malware.”
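The request flow GTIG describes is straightforward to picture. The minimal Python sketch below prompts the Gemini API and reads source code back from the reply, using the public google-generativeai client; the model name, prompt, and API key are placeholder assumptions, and this illustrates only the general pattern, not HONESTCUE’s actual code.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")            # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

# A benign stand-in prompt; HONESTCUE's actual prompts are not public.
response = model.generate_content(
    "Write a C# method that prints the current UTC time."
)

# The reply body is plain source code that the caller can compile and run.
print(response.text)
```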
HONESTCUE’s fileless second stage takes the C# source code returned by the Gemini API and uses the legitimate .NET CSharpCodeProvider framework to compile and execute the payload directly in memory, leaving no artifacts on disk.
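CSharpCodeProvider is a .NET mechanism, but the underlying fileless pattern, compiling a source string and running it without ever writing to disk, can be sketched in any language with an in-memory compiler. A benign Python analogue, offered only to illustrate the technique:

```python
# Source arrives as a string (in HONESTCUE's case, from the Gemini API).
source = '''
def run():
    print("compiled and executed entirely in memory")
'''

code_obj = compile(source, "<generated>", "exec")  # compile the string; nothing touches disk
namespace = {}
exec(code_obj, namespace)  # execute the module body, defining run()
namespace["run"]()         # invoke the freshly built entry point
```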
Google has also drawn attention to a recent wave of ClickFix campaigns that abuse the public sharing feature of generative AI services to host realistic-looking instructions for fixing a common computer problem and ultimately distribute information-stealing malware. The activity was flagged by Huntress in December 2025.
Finally, the company said it identified and disrupted model extraction attacks, which systematically query a proprietary machine learning model in order to build a replica model that mirrors the target’s behavior. In one such large-scale attack, Gemini was targeted with over 100,000 prompts posing questions intended to replicate the model’s reasoning ability across a wide range of tasks in non-English languages.
Last month, Praetorian devised a PoC extraction attack in which a replica model achieved an accuracy rate of 80.1% after sending 1,000 queries to the victim’s API, recording the outputs, and training on them for 20 epochs.
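Conceptually, the attack treats every query-response pair as a free training example. A minimal sketch of that distillation loop, using a stand-in scikit-learn victim model rather than Praetorian’s actual setup:

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in "victim": a proprietary model the attacker can only query.
X_train = rng.normal(size=(2000, 10))
y_train = (X_train.sum(axis=1) > 0).astype(int)
victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300).fit(X_train, y_train)

# Step 1: send ~1,000 queries to the victim's API and record each output.
X_query = rng.normal(size=(1000, 10))
y_stolen = victim.predict(X_query)  # every response doubles as a training label

# Step 2: train a replica purely on the recorded query/response pairs
# (max_iter here loosely stands in for the "20 epochs" of training).
replica = MLPClassifier(hidden_layer_sizes=(32,), max_iter=20).fit(X_query, y_stolen)

# Step 3: measure how often the replica mirrors the victim on unseen inputs.
X_test = rng.normal(size=(500, 10))
agreement = accuracy_score(victim.predict(X_test), replica.predict(X_test))
print(f"replica matches victim on {agreement:.1%} of held-out queries")
```

The sketch underlines the point made below: the defender’s weights never leave the server, yet the replica converges on the same behavior using nothing but API responses.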
“Many organizations believe that keeping model weights private is sufficient security,” said security researcher Farida Shafiq. “But this creates a false sense of security. In reality, the behavior itself is the model. Each query-response pair is a training example for a replica. The model’s behavior is exposed through each API response.”