Length: 2 Days
Print Friendly, PDF & Email

Advanced AI Security: Understanding and Mitigating Risks in LLM and GenAI Training by Tonex

AI Security Course

This intensive 2-day course is designed to provide technical professionals and senior architects with a comprehensive understanding of the security risks associated with Large Language Models (LLMs) and Generated Artificial Intelligence (GenAI). Participants will delve into the vulnerabilities inherent in LLMs and GenAI systems, explore threat modeling techniques, and gain practical insights into mitigation strategies. By demystifying the workings of LLMs and GenAI, attendees will be equipped with the knowledge and tools necessary to effectively address security challenges in AI-driven environments.

Learning Objectives:

  • Understand the unique security risks posed by LLMs and GenAI.
  • Learn techniques for threat modeling in LLM and GenAI systems.
  • Demystify the functioning of LLMs and identify associated threats.
  • Demystify the operation of GenAI systems and recognize potential threats.
  • Explore mitigation strategies for addressing LLM and GenAI vulnerabilities.
  • Gain proficiency in implementing OWASP Top 10 security practices in LLM environments.

Audience:

Technical professionals, including software engineers, cybersecurity specialists, and senior architects, who are involved in designing, developing, or securing AI systems. Participants should have a foundational understanding of artificial intelligence concepts and cybersecurity principles.

Course Outlines:

Day 1: Understanding LLM Vulnerabilities and Threat Modeling

  • Introduction to Large Language Models (LLMs)
  • Risks and security challenges in LLMs
  • Threat modeling methodologies for LLM systems
  • Identifying common vulnerabilities in LLM architectures
  • Case studies and real-world examples of LLM security incidents
  • Hands-on threat modeling exercises for LLM systems

Day 2: Exploring GenAI Threats and Mitigation Strategies

  • Overview of Generated Artificial Intelligence (GenAI)
  • Demystifying the operation of GenAI systems
  • Threat landscape for GenAI applications
  • Mitigation strategies for GenAI vulnerabilities
  • Implementation of OWASP Top 10 security practices in LLM environments
  • Best practices for securing AI-powered applications
  • Case studies and real-world examples of GenAI security incidents
  • Hands-on threat modeling exercises for GenAI systems

Delivery Format:

The course will be delivered through a combination of lectures, interactive discussions, hands-on exercises, and case studies. Participants will have the opportunity to engage with industry experts and collaborate with peers to deepen their understanding of AI security concepts and practices. Threat modeling exercises will be incorporated throughout the course to provide practical experience in assessing and mitigating security risks in LLM and GenAI systems.

  • Threats and risks associated with LLMs,
  • Prompt injection
  • Insecure Output Handling
  • Training Data Poisoning
  • Supply Chain vulnerabilities
  • Insecure Plugin Design
  • Overreliance
  • Model-Theft
  • Excessive Agency
  • Model Denial of Service
  • Leverage GenAI Security Best Practices & Frameworks

Case Study: Google Secure AI framework (SAIF)

  • Overview of Best Practices
  • Proactive threat detection and response for LLMs
  • Leveraging threat intelligence, and automating defenses against LLM threats.
  • Platform security controls to ensure consistency
  • Enforcing least privilege permissions for LLM usage and development.
  • Adaptation of application security controls to LLM-specific threats and risks
  • Feedback loop when deploying and releasing LLM applications.
  • Contextualize AI risks in surrounding business processes.

Workshop 1: AI Risk Management Program

  • Reduce AI Data Pipeline Attack Surface & LLM Data Validation
  • Protecting the AI data pipeline
  • Threat Management and Least Privilege
  • LLM Application Security
  • GenAI security controls
  • Targeting GenAI-associated risks and threats
  • Governance oversight

Workshop 2: Analyzing Threats and risks associated with LLMs/GenAI

  • Prompt injection
  • Insecure Output Handling
  • Training Data Poisoning
  • Supply Chain vulnerabilities
  • Insecure Plugin Design
  • Overreliance
  • Model-Theft
  • Excessive Agency
  • Model Denial of Service

Workshop 3: LLM/GenAI Threat Modeling Maps

  • Weakness and Vulnerability Analysis (WVA)
  • Categorizing Threats with STRIDE
  • STRIDE Threat Categorization
  • Categorizing Threats with DREAD
  • Process for Attack Simulation and Threat Analysis (PASTA)
  • Common Attack Pattern Enumeration and Classification (CAPEC)
  • Common Vulnerability Scoring System (CVSS)

Request More Information

Please enter contact information followed by your questions, comments and/or request(s):
  • Please complete the following form and a Tonex Training Specialist will contact you as soon as is possible.

    * Indicates required fields

  • This field is for validation purposes and should be left unchanged.

Request More Information

  • Please complete the following form and a Tonex Training Specialist will contact you as soon as is possible.

    * Indicates required fields

  • This field is for validation purposes and should be left unchanged.