Anthropic's AI agents spark debate over business risk
Anthropic has expanded its push into AI agents and advanced models as regulators and enterprises reassess the risks and rewards of automation. The moves have drawn scrutiny from US policymakers and sparked debate among technology and security specialists.
The company's launch of Agent Harness, which connects Claude AI models with tools such as Notion and Anthropic's own API, has become an early test of how far organisations should trust software agents to act on their behalf. The product lets non-technical teams assemble and deploy workflow-style agents that can read information, make decisions, and trigger actions across different systems.
Danny Bluestone, chief executive of consultancy Ducks House, said the shift from AI as assistant to AI as actor changes the risk profile for many businesses. In his view, these tools lower the barrier to building complex software behaviour while making failures harder to predict and control.
"Anthropic's Agent Harness, as demo'd with Notion and Claude API, puts powerful software creation into non-technical hands. That's transformative, but risky.
"Like autopilot in aviation, it works until complexity hits. Without a deep understanding of user experience, systems design, and real-world constraints, teams can deploy flawed journeys or unstable infrastructure at speed.
"The upside is huge, but only for organisations with experienced operators who can define requirements properly, enforce standards, and debug fast when things break. In the wrong hands, this power can quickly derail products," Bluestone said.
Bluestone has spent more than two decades designing digital products and services for organisations including the Bank of England, the UK government, and Cancer Research UK. He now advises leadership teams on how AI affects customer journeys and operations, and argues that automation typically amplifies existing strengths and weaknesses rather than fixing them.
His comments come as Anthropic faces intense attention in Washington over a separate large model called Mythos, which US officials are reportedly assessing for national security and cybersecurity implications. That scrutiny has highlighted concerns that advanced AI systems could both defend networks and enable more sophisticated attacks.
Madhukar Irvathraya, co-founder and managing partner of risk consultancy Oraczen, said the policy debate risks overlooking a growing divide between organisations that use AI effectively and those that do not. He described this divide as a structural shift in how resilience and exposure are distributed across the economy.
"There's no question the caution from US authorities is warranted. The pace of AI advancement, particularly in cybersecurity, is unprecedented. But the narrative cannot stop at risk alone. Beneath the headlines, a deeper structural shift is underway: the emergence of a two-tier landscape of AI 'haves' and 'have-nots'," Irvathraya said.
"On one side are AI-native organizations operating at dramatically higher speed, building capabilities that can both defend and disrupt. On the other, many enterprises remain constrained by legacy systems and incremental change, struggling to keep up. That widening gap is where the real risk lies," he said.
He said AI now underpins both offensive and defensive cybersecurity strategies, and that deployment decisions inside individual companies will matter as much as regulation at the national level.
"In this context, AI is not just a source of new threats, it is also the most powerful defense enabler available. It can augment security teams, accelerate threat detection, and identify vulnerabilities at a scale previously unimaginable. But those benefits depend on deliberate, disciplined adoption," Irvathraya said.
He warned that ad hoc experiments could leave organisations more exposed even as rivals standardise and industrialise their use of AI.
"The key question is not whether enterprises adopt AI, but how. Fragmented, experimental use will only widen the divide. Structured, controlled, and observable AI deployment can help close it," he said.
He said businesses and regulators should focus on practical governance rather than extremes of optimism or alarmism.
"The path forward should be shaped not by fear or hype, but by responsible, enterprise-grade adoption, because in a world of AI-driven threats, only AI-enabled, resilient organizations will be able to keep pace," he said.