IT Brief US - Technology news for CIOs & IT decision-makers
Realistic hooded figure dark room laptop binary code data theft cyber threats

Critical Gemini flaws exposed risks of AI-enabled data theft

Thu, 2nd Oct 2025

Tenable researchers have identified three critical vulnerabilities in Google's Gemini AI suite, which could have enabled attackers to steal sensitive user data by manipulating the AI platform's behaviour.

The vulnerabilities, referred to as the "Gemini Trifecta", existed in three key components of the Gemini suite: Gemini Cloud Assist, Gemini Search Personalisation Model, and the Gemini Browsing Tool. Tenable reports that these flaws have now been remediated by Google, but their initial presence highlighted new avenues for attackers to exploit AI-driven systems.

Exposure routes

According to Tenable's research, each part of the Gemini suite exposed users to different types of risk. In the case of Gemini Cloud Assist, attackers had the ability to plant poisoned log entries. When users subsequently interacted with Gemini, the system could follow these malicious instructions without user knowledge.

With the Gemini Search Personalisation Model, attackers found a way to inject queries into a victim's browser history. Gemini, in turn, treated these injected entries as trusted context, potentially enabling the extraction of sensitive information such as location details and user data.

Finally, through the Gemini Browsing Tool, attackers could direct Gemini to make covert outbound requests that transmitted private user data to attacker-controlled servers. This method allowed sensitive data to leave the user's environment by leveraging Gemini's access and functionality, all without raising user suspicion.

In combination, these three vulnerabilities offered various attack channels through which user data could be secretly accessed and exfiltrated. Tenable reports that the Gemini Trifecta demonstrated that attackers did not need to rely on traditional threat vectors such as malware or phishing emails, as an AI platform itself could be transformed into the attack vehicle.

New risks in AI adoption

"Gemini draws its strength from pulling context across logs, searches, and browsing. That same capability can become a liability if attackers poison those inputs," said Liv Matan, Senior Security Researcher at Tenable.

Tenable's analysis indicates that the root cause of the problems lay in how Gemini's integrations treated all data inputs as equally trustworthy, regardless of origin. This flaw meant that poisoned logs, malicious search entries, or hidden web content could all be misinterpreted as safe and used to inform the platform's behaviour.

The potential consequences outlined by Tenable include: the silent insertion of malicious instructions, exfiltration of sensitive data (such as user memories and location history), abuse of cloud integrations to access wider resources, and tricking Gemini into sending data to attacker-controlled servers via its browsing tool.

Security recommendations

With Google having remediated the identified vulnerabilities, Tenable stresses the importance for security teams to view AI-driven features as active areas for ongoing threat assessment.

Tenable recommends that security professionals audit logs, search histories, and integrations regularly to detect any attempts at poisoning or manipulation. Additionally, they advise monitoring for unexpected tool executions or outbound requests that could indicate data exfiltration, and testing AI-enabled services for resilience against prompt injection attacks in order to strengthen defences proactively.

Liv Matan commented:

"The Gemini Trifecta shows how AI platforms can be manipulated in ways users never see, making data theft invisible and redefining the security challenges enterprises must prepare for. Like any powerful technology, large language models (LLMs) such as Gemini bring enormous value, but they remain susceptible to vulnerabilities. Security professionals must move decisively, locking down weaknesses before attackers can exploit them and building AI environments that are resilient by design, not by reaction. This isn't just about patching flaws; it's about redefining security for an AI-driven era where the platform itself can become the attack vehicle."

Matan also emphasised:

"This vulnerability disclosure underscores that securing AI isn't just about fixing individual flaws. It's about anticipating how attackers could exploit the unique mechanics of AI systems and building layered defenses that prevent small cracks from becoming systemic exposures."

Google has addressed the vulnerabilities, and Tenable confirmed that no action is required from users as a result of the patches.