
Over half of firms lack clear AI policy as risks mount, survey finds


More than half of businesses currently have no formal artificial intelligence (AI) policy in place to guide employees and mitigate risks, according to research by WorkNest.

A survey of over 500 employers found that 54% of organisations lack any AI policy, while a further 24% are in the process of developing one. Only 13% of respondents indicated they have clear, documented rules regarding the use of AI tools at work.

Growing use, unclear guidance

The findings indicate that many employees are already using AI tools such as ChatGPT, but companies are struggling to develop guidance that sets out acceptable boundaries, responsibilities, and oversight. Nearly half of the organisations surveyed reported having no formal stance on access to AI tools, reflecting uncertainty and inconsistency in their approach.

Among organisations that have taken a position, 38% said access to AI tools was restricted, while 17% permitted fully open access. Just 0.7% reported banning AI tools entirely.

Key concerns identified

Asked about their biggest concerns regarding workplace AI use, 41% of employers cited data protection and privacy risks as their top issue. Misinformation and inaccurate outputs worried 30% of respondents, while 16% pointed to legal and compliance challenges. Concerns about overreliance on AI were raised by 11%, and just 3% of those surveyed said they had no major concerns about AI in the workplace.

The findings from the survey, which covered 505 HR professionals and employers, highlight the gap between the pace of AI adoption and organisations' ability to implement safeguards and clear usage rules.

Ownership and responsibility

Uncertainty also exists over who should take ownership of AI governance within organisations. Almost half (47%) of survey participants felt that senior leadership should establish the rules, but over one in five (23%) admitted their organisation had no designated individual or team responsible for AI guidance or governance.

Legal and HR implications

Employers remain fully responsible for all workplace decisions influenced by AI, even when using third-party tools. This means that, should AI-driven decisions result in discrimination or other breaches, intentional or not, the business, not the technology provider, could face employment tribunal claims and substantial financial consequences.

Alice Brackenridge, Employment Law Advisor at WorkNest, said, "Without robust processes to monitor, regulate, and review AI outputs, which include conducting regular equality and bias assessments, organisations may inadvertently expose themselves to avoidable and costly legal challenges."

She added that businesses cannot wait until something goes wrong.

"Proactive steps need to be taken by a combination of senior leadership, HR, IT and legal teams, in order to set boundaries, establish policies and provide training. AI can deliver huge benefits, but only if it is managed responsibly and transparently."

Calls for policy and training

WorkNest recommends that organisations develop clear policies to govern the use of AI tools, conduct regular assessments of bias and equality impacts, and ensure employees receive appropriate training. Failure to do so could expose organisations not only to data protection and compliance challenges but also to legal disputes under employment law.

As businesses continue to adopt AI at pace, experts warn that formalising governance and responsibility for AI use has become increasingly important for managing risks and supporting accountability within organisations.
