
Google Cloud unveils agentic AI to boost security operations efficiency
Google Cloud has outlined plans to integrate agentic AI into its security operations, aiming to automate routine tasks and improve efficiency for security teams.
The use of agentic AI within security is intended to move beyond existing assistive AI by allowing intelligent agents to independently identify, reason through, and dynamically execute tasks, while keeping human analysts informed and involved in the process.
Building on customer experiences with Gemini in Security Operations, Google Cloud aims to develop a security operations centre (SOC) where these intelligent agents collaborate with human analysts. Hector Peña, Senior Information Security Director at Apex Fintech Solutions, commented on the current benefits, stating: "No longer do we have our analysts having to write regular expressions that could take anywhere from 30 minutes to an hour. Gemini can do it within a matter of seconds."
Google Cloud has recently developed new AI agents as part of its Gemini in Security suite. The alert triage agent in Google Security Operations is designed to perform dynamic investigations and deliver verdicts on alerts. This agent is expected to be available in preview to selected customers in the second quarter of 2025. It analyses the context of each alert, gathers supporting information, and provides an audit log detailing the evidence, reasoning, and decisions behind its verdicts. This tool aims to reduce repetitive work for Tier 1 and Tier 2 security analysts who manage high volumes of daily alerts.
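The article does not describe how the triage agent is implemented, but the behaviour it outlines, gathering context, reaching a verdict, and recording the evidence and reasoning behind it, can be illustrated with a toy sketch. All names here (`AuditEntry`, `TriageResult`, the single IP-reputation check) are hypothetical, chosen only to show the audit-log idea:

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class AuditEntry:
    step: str       # e.g. "gather_context" or "verdict"
    evidence: str   # what the agent observed
    reasoning: str  # why that observation mattered

@dataclass
class TriageResult:
    alert_id: str
    verdict: str    # "benign" or "malicious" in this toy example
    audit_log: List[AuditEntry] = field(default_factory=list)

def triage(alert_id: str, source_ip: str, known_bad_ips: Set[str]) -> TriageResult:
    """Toy triage: evaluate one signal and log evidence for every decision,
    so a human analyst can audit how the verdict was reached."""
    result = TriageResult(alert_id=alert_id, verdict="benign")
    result.audit_log.append(AuditEntry(
        step="gather_context",
        evidence=f"source_ip={source_ip}",
        reasoning="Collected the alert's network context for enrichment.",
    ))
    if source_ip in known_bad_ips:
        result.verdict = "malicious"
        result.audit_log.append(AuditEntry(
            step="verdict",
            evidence=f"{source_ip} found in threat list",
            reasoning="Source IP matches known-bad infrastructure.",
        ))
    return result
```

A real triage agent would weigh many signals rather than one lookup; the point is that every step leaves an auditable trail, which is what keeps human analysts able to verify the agent's verdicts.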
In Google Threat Intelligence, the malware analysis agent is designed to undertake the reverse engineering of potentially malicious files. Also expected to be available for preview to selected customers in Q2 2025, this agent examines suspicious code, creates and executes deobfuscation scripts, and presents a summary along with a final verdict on whether the file is malicious.
The agentic SOC concept involves connecting multiple specialised agents that collaborate with analysts to automate a variety of security workflows. Google Cloud believes this could yield significant efficiency gains, enabling security professionals to dedicate more attention to complex threats and strategic priorities.
Google Cloud provided examples of critical SOC functions that could be automated or orchestrated through agentic AI. These include data management, alert triage, investigation, response actions, threat research, threat hunting, malware analysis, exposure management, and detection engineering.
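Google has not published how its agentic SOC dispatches work between these specialised agents; a minimal sketch of the routing idea, with entirely hypothetical agent names and a deliberate fallback to a human analyst, might look like this:

```python
from typing import Callable, Dict

# Hypothetical specialised agents, one per SOC workflow.
def triage_agent(alert: dict) -> str:
    return f"triaged alert {alert['id']}"

def malware_agent(alert: dict) -> str:
    return f"analysed sample {alert.get('sample', 'n/a')}"

# Registry mapping SOC functions to the agent that handles them.
AGENTS: Dict[str, Callable[[dict], str]] = {
    "alert_triage": triage_agent,
    "malware_analysis": malware_agent,
}

def route(alert: dict) -> str:
    """Dispatch an alert to the agent registered for its workflow.
    Anything without a registered agent escalates to a human,
    keeping analysts in the loop for unfamiliar cases."""
    agent = AGENTS.get(alert["workflow"])
    if agent is None:
        return "escalate to human analyst"
    return agent(alert)
```

The modularity Google describes, new agents built by combining existing security capabilities, maps naturally onto this registry pattern: adding a detection-engineering or threat-hunting agent is just another entry, not a change to the routing logic.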
To support the deployment of reliable AI agents, Google Cloud leverages its broad security data and expertise, advanced AI research, and integrated technology stack. The company stated that these resources allow for the development of agents capable of human-like planning and reasoning, producing consistent and high-quality outcomes across security tasks. Google also pointed to the modularity of this approach, with new agents constructed through the combination of existing security capabilities.
Interoperability is also a focus for Google Cloud, with the introduction of the Agent2Agent (A2A) protocol to enable communication among agents from different vendors, and the Model Context Protocol (MCP) for standardised interaction between AI and security applications.
Google Cloud is open-sourcing MCP servers for Google Unified Security, allowing customers to build custom workflows that combine Google Cloud and other security solutions. The company emphasises its commitment to an open ecosystem in which agents from various vendors and products can work together.
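MCP is built on JSON-RPC 2.0, with standard methods such as `tools/call` for invoking a tool an MCP server exposes. The sketch below builds such a request; the tool name `lookup_ioc` and its arguments are illustrative placeholders, not tools from Google's MCP servers:

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message.
    In practice this would be sent to an MCP server over a
    transport such as stdio or HTTP."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Illustrative call: ask a hypothetical enrichment tool about an indicator.
msg = mcp_tool_call(1, "lookup_ioc", {"indicator": "198.51.100.7"})
```

Because every MCP server speaks this same request shape, a custom workflow can mix Google Cloud tooling with other vendors' security solutions behind one interface, which is the open-ecosystem point the company emphasises.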
Grant Steiner, Principal Cyber-Intelligence Analyst, Enablement Operations, Emerson, said: "We see an immediate opportunity to use MCP with Gemini to connect with our array of custom and commercial tools. It can help us make ad-hoc execution of data gathering, data enrichment, and communication easier for our analysts as they use the Google Security Operations platform."
Google Cloud also introduced SecOps Labs, an initiative offering customers early access to AI pilots in Google Security Operations, and providing a mechanism for the community to give feedback. The initial set of pilots includes autonomous conversion of threat reports into detection rules, the generation of automation playbooks based on historical incident analysis, and updates to data parsers using natural language commands.
SecOps Labs is intended as a space for teams to trial and refine AI capabilities, and help shape future Google Security Operations technologies by offering feedback based on real-world experiences.