OpenAI on Friday revealed that it banned a set of accounts that used its ChatGPT tool to develop a suspected artificial intelligence (AI)-powered surveillance tool.
The social media listening tool is said to likely originate from China and to be powered by one of Meta's Llama models, with the accounts using the AI company's models to generate detailed descriptions and analyze documents for an apparatus capable of collecting real-time data and reports about anti-China protests in the West and sharing the insights with Chinese authorities.
The campaign takes its name from the network's behavior in "promoting and reviewing surveillance tooling," researchers Ben Nimmo, Albert Zhang, Matthew Richard, and Nathaniel Hartley said, adding that the tool is designed to ingest and analyze posts and comments from platforms such as X, Facebook, YouTube, Instagram, Telegram, and Reddit.
In one instance flagged by the company, the actors used ChatGPT to debug and modify source code believed to run the monitoring software, referred to as the "Qianyue Overseas Public Opinion AI Assistant."
Besides using its models as a research tool to surface publicly available information about think tanks in the United States, as well as government officials and politicians in countries like Australia, Cambodia, and the United States, the cluster has also been found to use ChatGPT to read, translate, and analyze screenshots of English-language documents.
Some of the images were announcements of Uyghur rights protests in various Western cities, and were likely copied from social media. It is not currently known whether these images were authentic.
OpenAI also said it disrupted several other clusters of accounts that were found misusing ChatGPT for various malicious activities:
- Deceptive Employment Scheme – A network linked to North Korea's fraudulent IT worker scheme that was involved in creating personal documentation for fictitious job applicants, such as resumes, online job profiles, and cover letters, as well as devising convincing responses to explain unusual behaviors like avoiding video calls, accessing corporate systems from unauthorized countries, or working irregular hours. Some of the bogus job applications were then shared on LinkedIn.
- Sponsored Discontent – A network likely of Chinese origin that was involved in creating social media content in English and long-form articles in Spanish that were critical of the United States and were subsequently published by Latin American news websites in Peru, Mexico, and Ecuador. Some of the activity overlaps with a known activity cluster.
- Romance-baiting Scam – A network of accounts that was involved in translating and generating comments in Japanese, Chinese, and English for posting on social media platforms, including Facebook, X, and Instagram, in connection with suspected Cambodia-origin romance and investment scams.
- Iranian Influence Nexus – A network of five accounts that was involved in generating X posts and articles that were pro-Palestinian, pro-Iran, and anti-U.S., and that were shared on websites associated with Iranian influence operations tracked as the International Union of Virtual Media (IUVM) and Storm-2035. One of the banned accounts was used to create content for both operations, indicating a "previously unreported relationship."
- Kimsuky and BlueNoroff – A network of accounts operated by North Korean threat actors that was involved in gathering information related to cyber intrusion tools and cryptocurrency-related topics, as well as debugging Remote Desktop Protocol (RDP) code.
- Youth Initiative Covert Influence Operation – A network of accounts that was involved in creating English-language articles for a website named "Empowering Ghana" and social media comments targeting the Ghana presidential election.
- Task Scam – A network of accounts originating from Cambodia that was involved in translating comments between Urdu and English as part of a scam that lures unsuspecting people into jobs performing simple tasks (e.g., liking videos or writing reviews) in exchange for a non-existent commission, access to which requires victims to put up their own money.
The development comes as AI tools are increasingly being used by bad actors to facilitate cyber-enabled disinformation campaigns and other malicious operations.
Last month, the Google Threat Intelligence Group (GTIG) revealed that more than 57 distinct threat actors with ties to China, Iran, North Korea, and Russia had used its Gemini AI chatbot to improve multiple phases of the attack cycle, conduct research into topical events, or perform content creation, translation, and localization.
"The unique insights that AI companies can glean from threat actors are particularly valuable if they are shared with upstream providers, such as hosting and software developers, downstream distribution platforms, such as social media companies, and open-source researchers," OpenAI said.
"Equally, the insights that upstream and downstream providers and researchers have into threat actors open up new avenues of detection and enforcement for AI companies."