Business leaders' mixed signals about AI use are creating workplace anxiety
Business leaders may be giving their people contradictory messages about artificial intelligence – messages that risk undermining both the technology's potential in the workplace and the confidence of their teams. While 93% of leaders say AI should be used in their team's work, more than half admit they're bothered when they actually notice employees using it. This disconnect is creating an "AI transparency paradox" – and it may already be causing problems.
The numbers from our recent survey of business leaders' attitudes to AI use paint a picture of enthusiastic adoption paired with discomfort at what that adoption looks like in practice. More than seven in ten leaders want employees to feel confident using AI to support their communication, and a similar proportion believe they can detect when team members have used AI in their work. Yet 52% say it bothers them when they notice that usage.
It's easy to see how these mixed signals can cause confusion.
The shadow AI problem
Microsoft's recent research on "Shadow AI" tools in UK organizations reveals what happens when this paradox plays out. According to its findings, 71% of UK employees have used unapproved consumer AI tools at work, and 51% continue to do so every week. Employees are turning to these tools for workplace communications, drafting materials, and finance-related tasks – often without their employer's knowledge or approval.
The reasons are straightforward: employees use AI for its ease and familiarity, with 41% saying it's what they're used to in their personal lives and 28% reporting that their company doesn't provide a work-approved alternative. Only 32% expressed concern about privacy when using these tools, and just 29% worried about their organization's IT security.
This represents a failure of transparency in both directions. Employees aren't disclosing their AI use because they sense it might be frowned upon, even as leaders claim to support it. Meanwhile, sensitive company and customer data flows through unprotected channels, creating security vulnerabilities and compliance risks that organizations may not even know exist.
When silence backfires
The stakes of this transparency gap became clear in October 2025, when Deloitte agreed to refund part of a AU$440,000 consultancy fee to the Australian government. The firm had delivered a report on compliance frameworks that contained fabricated academic citations, false references, and a quote wrongly attributed to a Federal Court judgment. The problem wasn't that Deloitte used AI – it was that the firm failed to disclose it and didn't implement adequate verification processes.
Deloitte acknowledged using a generative AI tool (Azure OpenAI GPT-4o) during early drafting, claiming that human review refined the content and that the substantive findings were unaffected. But the Australian government didn't see it that way: more than a dozen fictitious references had to be removed or replaced. The incident led to suggestions that future government contracts might include stricter AI-usage clauses.
Academic Christopher Rudge characterized the errors as AI "hallucinations" – instances where generative models fill gaps or invent plausible but incorrect details. The Deloitte case highlighted a critical problem: when AI use isn't disclosed, there's no way to trace whether claims are backed by robust evidence or simply generated by a probabilistic machine.
The detectability trap
The belief that AI use is easily detectable creates its own set of problems. When 70% of leaders think they can tell when AI has been used, they're likely to make assumptions – sometimes wrong ones – about the authenticity of their team's work. This puts employees in a bind: use AI and risk being seen as less capable, or avoid it and fall behind more efficient colleagues.
Censuswide's research shows that 76% of leaders want team members to be comfortable communicating without AI tools, while 74% agree AI has an important function in workplace communication. This isn't hypocrisy. It reflects genuine uncertainty about how AI should fit into workplace relationships. But uncertainty at the top translates into anxiety at every level below.
The solution isn't to ban AI or pretend we can return to a pre-AI workplace. That ship has sailed. Instead, organizations need to bridge the transparency gap from both sides.
Moving forward
First, leaders need to articulate clear policies about AI use: not just what's permitted, but how it should be disclosed and what verification standards apply. The Deloitte incident shows that "AI was used during drafting" isn't enough. Organizations need to specify what level of human oversight is required for different types of work.
Second, leaders must examine their own reactions to AI use. If you're bothered when you detect AI in employee communications, ask yourself why. Is it because the work quality suffered? Because proper verification wasn't done? Or simply because AI was used at all? If it's the last of these, the problem may lie with your expectations, not your employees' work habits.
Third, organizations need to provide access to approved AI tools and clear guidance on their use. The shadow AI phenomenon exists because employees see value in these tools but lack proper channels to use them. Give people the tools they need within appropriate guardrails, rather than forcing their AI use underground.
The AI transparency paradox is not going to resolve itself. As adoption accelerates – and our data suggests we're witnessing one of the fastest technology adoptions in modern workplace history – the gap between official support and actual acceptance will only grow more problematic. Organizations that address this now will build cultures where AI enhances human capability. Those that don't will find themselves with the worst of both worlds: security risks from shadow AI and reduced productivity from employees too anxious to use the tools that could help them most.