Google reveals that hackers are using AI Gemini to enhance their attack capabilities.

45/68 Monday, February 3, 2025

Google has released its latest report revealing that Advanced Persistent Threat (APT) groups, or state-sponsored hacker groups from multiple countries, are experimenting with Gemini, Google’s AI assistant, to enhance their cyber operations. These groups are not using AI for direct attacks but rather as a tool to assist in code development, vulnerability research, and operational planning. Google has identified that APTs from Iran, China, North Korea, and Russia are utilizing Gemini for different purposes:

  • Iranian APT: Uses Gemini to survey defense organizations, research publicly disclosed vulnerabilities, develop phishing campaigns, and generate content for information operations.
  • Chinese APT: Focuses on reconnaissance of U.S. military and government organizations, vulnerability research, writing scripts for lateral movement, and developing evasion techniques.
  • North Korean APT: Uses Gemini to find free hosting providers, survey targets, and assist in developing malware techniques. Additionally, AI is used to generate fake documents, such as job applications, to infiltrate Western companies.
  • Russian APT: Has limited use of Gemini, primarily leveraging it for translation assistance and generating more complex code.

Google has observed that malicious actors have attempted to bypass Gemini’s security measures using jailbreaking and prompt injection techniques. However, these attempts have not been successful. Nevertheless, the AI market still includes models with weaker safeguards, which could be easily misused. Cyber intelligence firm KELA has reported that AI models like DeepSeek R1 and Qwen 2.5 by Alibaba have weaker security measures, making them more accessible for hacker groups to support their attacks. Similarly, Unit 42’s research indicates that such AI models can be adapted for effective cyberattacks.

This reflects a growing trend in which APT groups are leveraging AI to enhance their cyber capabilities. While there is no direct evidence that AI is being used for attacks, its role in aiding code development and vulnerability research significantly improves hackers’ ability to breach systems and evade detection. At the same time, AI models with weaker protections pose an increasing risk of being exploited for malicious purposes.

Source https://www.bleepingcomputer.com/news/security/google-says-hackers-abuse-gemini-ai-to-empower-their-attacks/