IT Brief US - Technology news for CIOs & IT decision-makers

Cloud Security Alliance unveils new AI governance report

Yesterday

The Cloud Security Alliance (CSA) has announced the release of its latest report, AI Organizational Responsibilities: AI Tools and Applications, aimed at guiding organisations in the efficient and responsible implementation of Artificial Intelligence (AI) technologies.

This report is the third in a series addressing AI organisational responsibilities, with a primary focus on the practical application of AI within organisations. It explores various tools, applications, and supply chains required for the successful deployment of AI-driven systems.

Ken Huang, Co-chair of the AI Organizational Responsibilities Working Group and a lead author of the report, noted, "By focusing on the practical aspects of AI adoption and management such as those covered in this report, we are equipping organisations with the essential knowledge and strategies they will need to adopt and manage AI effectively and responsibly, while also addressing the practical challenges of this fast-changing technological landscape."

The paper provides structured frameworks and practical guidance in critical areas of AI governance and management. Key areas include the security challenges of Large Language Models (LLMs) and Generative AI (GenAI) applications, management of third-party and supply chains, and broader organisational implications of AI adoption.

Each of these areas is examined through six analytical lenses: evaluation criteria, responsibility assignment using the RACI model, high-level implementation strategies, continuous monitoring and reporting, access control mapping, and adherence to AI standards and best practices. Together, these lenses give organisations a comprehensive guide for implementing and managing AI systems.

Michael Roza, Co-chair of the Top Threats Working Group and also a lead author of the report, commented, "As AI technologies evolve and their adoption expands across industries, the need for strong governance, security protocols, and ethical considerations becomes increasingly critical. Organisations must remain vigilant, keeping up with emerging AI regulations, evolving best practices, and emerging security threats unique to AI systems."

The AI Organizational Responsibilities Working Group is committed to setting industry standards for security team roles and responsibilities, adapting them to the challenges and opportunities presented by AI technologies. The group also aims to identify necessary shifts in tasks and knowledge across security sub-teams, such as product security and detection and response teams.

The report is available for download, and those interested in further exploring AI governance can refer to two previous papers in the series: AI Organizational Responsibilities: Core Security Responsibilities and AI Organizational Responsibilities: Governance, Risk Management, Compliance, and Cultural Aspects.

The CSA continues to be at the forefront of defining standards and raising awareness of best practices to ensure secure cloud computing, engaging industry practitioners, associations, governments, and members worldwide in its mission.
