Certified AI Guidelines Auditor (ML, LLM & Agentic) – CAG-A Certification Program by Tonex
CAG-A certifies professionals who audit AI systems against enterprise policies and external frameworks across machine learning, large language models, and agentic workflows. Participants learn how to design evidence-based test plans, verify controls, and evaluate governance from data through deployment. The program emphasizes practical audit techniques, defensible sampling, traceable findings, and continuous assurance patterns that scale. Cybersecurity is a core thread, strengthening protection against data leakage, model abuse, prompt exploitation, and adversarial behavior. Graduates help organizations meet regulatory expectations while improving resilience, transparency, and trust in AI systems.
Learning Objectives
- Understand AI audit scope, roles, and lifecycle
- Apply external frameworks, including NIST, ISO, and the EU AI Act, as audit criteria
- Plan and execute tests for data, models, and monitoring
- Evaluate governance for LLM, RAG, and agents
- Produce risk-rated findings and remediation roadmaps
- Design continuous control monitoring and KPIs
- Strengthen cybersecurity by reducing leakage, abuse, and misuse
Audience
- IT Auditors and Assurance Professionals
- Cybersecurity Professionals
- Risk and Compliance Officers
- ML and Data Engineers
- AI Product and Platform Owners
- Governance, Risk, and Compliance Managers
- Internal and External Consultants
Program Modules
Module 1: AI audit foundations and criteria
- Assurance scope definition
- Control objectives mapping
- Criteria and testable statements
- Risk taxonomy alignment
- Evidence strategies and sampling
- Roles, RACI, and workflows
Module 2: Auditing machine learning lifecycle controls
- Data quality and lineage
- Model training reproducibility
- Validation and performance drift
- Monitoring and alert thresholds
- Bias and fairness checks
- Change and release control
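As a simple illustration of the monitoring topics above, a drift-threshold test can be expressed as a small, auditable rule. This is an illustrative sketch only: the accuracy values and the 0.05 threshold are assumptions, not figures from the program.

```python
# Illustrative drift-threshold check: compare a model's recent accuracy
# window against its baseline and flag when degradation exceeds an
# agreed alert threshold. All values here are assumed for demonstration.

def drift_alert(baseline_accuracy, recent_accuracies, threshold=0.05):
    """Return True when mean recent accuracy falls below baseline by more than threshold."""
    recent_mean = sum(recent_accuracies) / len(recent_accuracies)
    return (baseline_accuracy - recent_mean) > threshold

print(drift_alert(0.92, [0.91, 0.90, 0.92]))  # False: within tolerance
print(drift_alert(0.92, [0.85, 0.84, 0.86]))  # True: degradation beyond threshold
```

An auditor would verify that a control like this exists, that its threshold is documented and approved, and that alerts actually fire and reach an owner.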
Module 3: LLM and RAG system assurance
- Prompt governance and policy
- Grounding and retrieval checks
- Hallucination and safety testing
- Data privacy and redaction
- Output logging and traceability
- Abuse and jailbreak defenses
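The privacy and redaction checks above can be automated as evidence-gathering tests. The sketch below scans sampled LLM outputs for patterns that should have been redacted; the patterns and sample texts are illustrative assumptions, not a complete PII taxonomy.

```python
import re

# Minimal sketch of a data-privacy control test: scan sampled LLM outputs
# for patterns that redaction controls should have removed. The two
# patterns below are illustrative, not an exhaustive PII ruleset.

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_redaction_failures(outputs):
    """Return (sample_index, pattern_name, matched_text) for each leak found."""
    failures = []
    for i, text in enumerate(outputs):
        for name, pattern in PII_PATTERNS.items():
            for match in pattern.findall(text):
                failures.append((i, name, match))
    return failures

sampled_outputs = [
    "Your case has been escalated to support.",
    "Contact jane.doe@example.com for details.",  # should have been redacted
]
print(find_redaction_failures(sampled_outputs))
```

Each hit becomes a traceable finding tied to a specific sample, which supports the defensible sampling and evidence strategies emphasized throughout the program.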
Module 4: Agentic AI governance and safety
- Capability and risk mapping
- Tool permission boundaries
- Sandboxing and isolation
- Human-in-the-loop controls
- Autonomy limits and failsafes
- Evasion and misuse scenarios
Module 5: Evidence reporting and remediation planning
- Finding structure and ranking
- Root cause and impact rating
- Recommendations and quick wins
- Ownership and due dates
- Risk acceptance and exceptions
- Board-ready executive summaries
Module 6: Continuous assurance and control automation
- CCM metrics and telemetry
- Policy-as-code and guardrails
- Automated attestations
- Ticketing and workflow hooks
- Dashboards and trend analysis
- Periodic re-audit triggers
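Policy-as-code, as listed above, means expressing a control as a testable rule that runs automatically against system metadata. The sketch below shows the idea; the field names and the 90-day review cadence are assumptions for illustration, not a standard schema.

```python
# Minimal policy-as-code sketch: each control is a (policy_id, check)
# pair evaluated against an AI system's metadata record. Field names
# such as "output_logging" and "days_since_review" are assumed examples.

POLICIES = [
    ("logging-enabled", lambda s: s.get("output_logging") is True),
    ("review-cadence", lambda s: s.get("days_since_review", 999) <= 90),
]

def evaluate(system):
    """Return a list of (policy_id, passed) results for one system record."""
    return [(policy_id, bool(check(system))) for policy_id, check in POLICIES]

record = {"output_logging": True, "days_since_review": 120}
print(evaluate(record))  # failing checks can feed tickets and dashboards
```

Run on a schedule, results like these become the telemetry behind continuous control monitoring metrics, automated attestations, and re-audit triggers.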
Exam Domains
- Foundations of AI Audit Practice
- Controls Testing for ML Systems
- Governance of LLM and RAG
- Risk and Safety in Agentic AI
- Reporting and Remediation Strategies
- Continuous Assurance and Control Design
Course Delivery
The course is delivered through a combination of lectures, interactive discussions, workshops, and project-based learning, facilitated by experts in AI assurance. Participants gain access to curated online resources, case studies, and practical tools to reinforce skill development and on-the-job application.
Assessment and Certification
Participants are assessed through quizzes, assignments, and a capstone project aligned to an end-to-end AI audit. Upon successful completion and passing the certification exam, participants receive the Certified AI Guidelines Auditor credential from Tonex.
Question Types
- Multiple Choice Questions (MCQs)
- Scenario-based Questions
Passing Criteria
To pass the Certified AI Guidelines Auditor Certification Training exam, candidates must achieve a score of 70% or higher. Scenario-based written responses are graded separately and must meet minimum competency standards.
Ready to validate your expertise in auditing ML, LLM, and agentic AI systems? Join the CAG-A program by Tonex and become the trusted voice for accountable, secure, and compliant AI.
