CSA Large Language Model (LLM) Threats Taxonomy

Release Date: 06/10/2024

Working Group: AI Safety

This document aims to align the industry by defining key terms related to Large Language Model (LLM) risks and threats. Establishing a common language reduces confusion, helps connect related concepts, and facilitates more precise dialogue across diverse groups. This common language will ultimately advance Artificial Intelligence (AI) risk evaluation, AI control measures, and responsible AI governance. The taxonomy will also support further research within the context of CSA's AI Safety Initiative.

Key Takeaways:
  • Define the assets that are essential for implementing and managing LLM/AI systems
  • Define the phases of the LLM lifecycle
  • Define potential LLM risks
  • Define the impact categories of LLM risks
Related resources
  • Data Security within AI Environments
  • Introductory Guidance to AICM
  • Capabilities-Based Risk Assessment (CBRA) for AI Systems
  • The First Question Security Should Ask on AI Projects (Published: 01/09/2026)
  • Securing the Future: AI Strategy Meets Cloud Security Operations (Published: 01/09/2026)
  • Introducing the AI Maturity Model for Cybersecurity (Published: 01/08/2026)
  • How to Build a Trustworthy AI Governance Roadmap Aligned with ISO 42001 (Published: 01/07/2026)