
Google intercepts a massive cyberattack powered by first-ever AI-generated zero-day exploit

Google says it may have prevented a major cyberattack campaign involving a zero-day exploit developed with the help of AI. The company revealed in a new report that threat actors were preparing to use the exploit in a “mass exploitation event” before Google intervened.


In a report published by the Google Threat Intelligence Group (GTIG), the company detailed how hackers used AI to develop a previously unknown vulnerability capable of bypassing two-factor authentication (2FA) in a “popular open-source, web-based system administration tool,” which Google did not name.

Google said it worked with the affected vendor to disclose the flaw before it could be widely exploited, potentially disrupting the planned attacks.

“Although we do not believe Gemini was used, based on the structure and content of these exploits, we have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability,” GTIG wrote in its blog post.

The company said the exploit showed several signs commonly associated with AI-generated code, including unusually detailed educational-style comments, structured formatting, and even a hallucinated CVSS security score inside the Python script.

Google noted that the flaw stemmed from a “high-level semantic logic flaw,” something AI models are increasingly capable of identifying because they can use contextual reasoning to interpret a developer’s intent rather than simply scanning for crashes or malformed inputs.
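To make the idea of a "semantic logic flaw" concrete, here is a minimal, purely hypothetical sketch (not the actual vulnerability, which Google did not disclose) of the kind of intent-level bug a fuzzer would miss but contextual reasoning could catch: the developer's intent is "always require 2FA when it is enabled," yet the order of checks lets a client bypass it by simply omitting the OTP field. The `User` class, `login` function, and OTP value are all invented for illustration.

```python
class User:
    """Minimal stand-in user model for the illustration."""

    def __init__(self, password, twofa_enabled):
        self._password = password
        self.twofa_enabled = twofa_enabled

    def check_password(self, password):
        return password == self._password

    def verify_otp(self, otp):
        return otp == "123456"  # placeholder OTP check


def login(user, password, otp=None):
    """Intends to enforce 2FA whenever the account has it enabled."""
    if not user.check_password(password):
        return False
    # BUG (the semantic logic flaw): if the client omits the otp
    # field entirely, this branch never runs, so an account with
    # 2FA enabled can be logged into with the password alone.
    # The code never crashes and handles no malformed input, so
    # crash-oriented fuzzing would not flag it; only reasoning
    # about the developer's intent reveals the bypass.
    if otp is not None and user.twofa_enabled:
        return user.verify_otp(otp)
    return True
```

The fix would be to branch on `user.twofa_enabled` first and reject any login where the OTP is missing or invalid.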

Chinese and North Korean hackers using AI for vulnerability research

The report also noted that Chinese and North Korean threat actors have increasingly been using AI for vulnerability discovery, exploit development, and automated testing.

In one example, Google said it observed threat actors using expert-style prompts to make AI models behave like embedded-device security auditors while analysing router firmware and file transfer protocol implementations.

“You are currently a network security expert specializing in embedded devices, specifically routers. I am currently researching a certain embedded device, and I have extracted its file system. I am auditing it for pre-authentication remote code execution (RCE) vulnerabilities,” Google quoted as an example of prompts used by attackers.

The company also said threat actors had begun experimenting with a specialised vulnerability repository hosted on GitHub known as “wooyun-legacy.” The project operates as a Claude Code skill plugin loaded with more than 85,000 real-world vulnerability cases collected from a Chinese bug bounty platform and was allegedly used by attackers to improve exploit discovery.


“By priming the model with vulnerability data, it facilitates in-context learning to steer the model to approach code analysis like a seasoned expert and identify logic flaws that the base model might otherwise fail to prioritize,” Google wrote.

“As the generative AI landscape matures, the methods by which threat actors procure and operationalize these models have shifted from simple experimentation to industrial-scale consumption,” the company added.

The new Google report comes amid growing awareness of the risks posed by AI systems, with Anthropic famously delaying the public launch of its Mythos model over the risk of misuse.
