Length: 2 Days

SSDF for AI, GenAI & ML Systems Essentials Training by Tonex

Certified SysML Security Modeler (CSSM)

Build a defensible AI software supply chain by extending the Secure Software Development Framework (SSDF) into data-centric, model-centric, and pipeline-centric realities of AI, GenAI, and ML systems. This course bridges classic secure SDLC with MLOps, model registries, and deployment patterns to prevent data poisoning, model theft, prompt injection, and inference-time abuse. Impact on cybersecurity includes stronger provenance controls for models and datasets, tighter guardrails across CI/CD for AI artifacts, and measurable risk reduction against emerging LLM and model supply chain threats. Participants leave with actionable controls that map SSDF practices to modern AI lifecycle checkpoints, ready for immediate adoption.

Learning Objectives

  • Explain how SSDF maps to each phase of the AI and GenAI development lifecycle
  • Identify threats including data poisoning, model skew, prompt injection, and model exfiltration
  • Design controls for secure data collection, labeling, curation, and versioning
  • Apply model integrity, provenance, and SBOM/SBOLM techniques in practice
  • Operationalize SSDF-aligned controls within MLOps pipelines and runtime
  • Strengthen governance and compliance posture where cybersecurity risk is explicitly managed across AI systems

Audience

  • Cybersecurity Professionals
  • AI/ML Engineers and MLOps Practitioners
  • Software Security Architects
  • Data Engineers and Data Stewards
  • Product Managers and Technical Program Managers
  • Compliance, Risk, and Governance Leads

Course Modules

Module 1 – SSDF vs AI Development Lifecycle

  • Contrast SSDF and AI lifecycle checkpoints
  • Threat modeling for data, model, and prompts
  • Security requirements and acceptance criteria
  • Governance of datasets, features, and models
  • Security gates for model promotion and release
  • Metrics and evidence for control effectiveness

Module 2 – Secure Data Collection & Labeling

  • Data source trustworthiness and chain of custody
  • PII minimization, consent, and policy enforcement
  • Labeling quality, drift, and adversarial labeling checks
  • Versioning datasets, features, and annotations
  • Secure data pipelines, access, and encryption
  • Data poisoning detection and remediation playbooks

Module 3 – Model Integrity & Provenance

  • Model signing, attestation, and verification
  • SBOM for models and SBOLM concepts
  • Reproducible training and deterministic packaging
  • Registry controls, quarantine, and trust policies
  • Monitoring for theft, tampering, and drift
  • Key management and secure artifact distribution

Module 4 – Securing AI Pipelines (MLOps)

  • Hardened CI/CD for data, features, and models
  • Supply chain scanning for AI dependencies
  • Secret management and isolated build stages
  • Runtime controls, admission, and policy enforcement
  • Shadow deployments, canaries, and rollback safety
  • Auditability, traceability, and continuous compliance

Module 5 – GenAI Threats & Controls

  • Prompt injection, jailbreaking, and guardrail patterns
  • Data leakage via outputs and embeddings
  • Safety policies, red teaming, and evaluation
  • RAG-specific risks and retrieval hardening
  • Tenant isolation, quota, and abuse prevention
  • Incident response tailored to LLM applications

Module 6 – Mapping SSDF to OWASP LLM Top 10

  • Crosswalk SSDF practices to LLM risks
  • Prioritize controls by likelihood and impact
  • Design verification steps and test evidence
  • Operational runbooks and ownership models
  • KPIs, SLAs, and policy enforcement mapping
  • Roadmap for incremental enterprise adoption

Elevate your AI security posture and compliance readiness—enroll today to operationalize SSDF across your AI, GenAI, and ML systems and ship trustworthy, resilient solutions.

Request More Information