This curriculum spans a full multi-phase AI security engagement, covering the technical, governance, and lifecycle controls applied in enterprise-scale deployments from development through decommissioning.
Module 1: Risk Assessment and Threat Modeling in AI Systems
- Selecting threat modeling frameworks (e.g., STRIDE vs. PASTA) based on organizational maturity and AI deployment scope.
- Mapping data flows in AI pipelines to identify high-risk attack surfaces such as model training data ingestion and inference endpoints.
- Conducting adversarial risk assessments to evaluate model susceptibility to evasion, poisoning, and model inversion attacks.
- Integrating threat intelligence feeds to prioritize risks based on real-world exploit trends targeting machine learning components.
- Defining risk tolerance thresholds for AI-driven decisions in regulated domains like healthcare or finance.
- Documenting risk treatment plans that specify mitigation ownership, timelines, and residual risk acceptance criteria (see the risk-register sketch after this list).
- Aligning threat modeling outputs with enterprise risk management frameworks such as NIST RMF or ISO 31000.
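
The risk treatment bullet above can be made concrete with a minimal risk-register structure. This is an illustrative Python sketch assuming a simple likelihood × impact scoring scheme; the field names, threats, owners, and dates are hypothetical placeholders, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskTreatmentEntry:
    """One row of an AI risk register; all fields are illustrative."""
    threat: str                # e.g., "training-data poisoning via public scraper"
    likelihood: int            # 1 (rare) .. 5 (almost certain)
    impact: int                # 1 (negligible) .. 5 (severe)
    mitigation: str
    owner: str
    due: date
    residual_risk_accepted: bool = False

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring; substitute your organization's scheme.
        return self.likelihood * self.impact

register = [
    RiskTreatmentEntry("model inversion via inference API", 3, 4,
                       "rate limiting + output perturbation", "ml-platform-team",
                       date(2026, 6, 30)),
    RiskTreatmentEntry("poisoned third-party dataset", 2, 5,
                       "provenance checks + hash pinning", "data-eng-team",
                       date(2026, 3, 31)),
]

# Work the register in descending risk-score order.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.score:>2}  {entry.threat} -> owner: {entry.owner}")
```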
Module 2: Secure Data Governance for AI Development
- Establishing data classification policies that distinguish between sensitive training data, synthetic data, and public datasets.
- Implementing role-based access controls (RBAC) for data scientists, MLOps engineers, and auditors accessing AI data repositories.
- Designing data lineage tracking to maintain auditability from raw input data through preprocessing and model training.
- Enforcing data minimization principles during feature engineering to reduce exposure of personally identifiable information (PII).
- Integrating data retention schedules with AI model retraining cycles to ensure compliance with data sovereignty laws.
- Deploying tokenization or pseudonymization techniques for datasets used in cross-border AI development teams (see the pseudonymization sketch after this list).
- Validating third-party data providers for compliance with contractual data protection clauses and audit rights.
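
A minimal sketch of keyed pseudonymization as mentioned above, using Python's standard-library `hmac`. The key handling shown is a placeholder assumption; in practice the key would come from a managed KMS, and whether HMAC-based tokens satisfy a given regulation's pseudonymization bar is a legal determination, not a purely technical one.

```python
import hashlib
import hmac

# Keyed pseudonymization: the same input always maps to the same token, so
# joins across datasets still work, but the mapping cannot be reversed
# without the secret key. Key management (e.g., via a KMS) is out of scope here.
SECRET_KEY = b"load-from-your-kms-not-source-code"  # placeholder, never hardcode

def pseudonymize(value: str) -> str:
    """Return a deterministic, non-reversible token for a PII value."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane.doe@example.com", "purchase_total": 42.10}
safe_record = {**record, "email": pseudonymize(record["email"])}
print(safe_record)
```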
Module 3: Secure Model Development and Training Practices
- Selecting secure development environments (e.g., isolated containers or air-gapped sandboxes) for training models on sensitive data.
- Implementing integrity checks on training datasets to detect tampering or unauthorized modifications prior to model training (see the manifest-verification sketch after this list).
- Configuring model training pipelines to prevent accidental logging or caching of sensitive data in intermediate artifacts.
- Applying differential privacy techniques during training when working with datasets containing personal or regulated information.
- Enforcing code review and peer validation processes for custom model architectures and training scripts.
- Using cryptographically signed model checkpoints to ensure provenance and prevent model substitution attacks.
- Restricting GPU/TPU cluster access to authorized personnel and monitoring for unauthorized compute usage.
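
One way to implement the dataset integrity check above is a pinned hash manifest verified before every training run. A minimal sketch, assuming a JSON manifest of `{relative_path: sha256}` produced at data-vetting time; the manifest path and format are illustrative.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream-hash a file so large dataset shards need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest_path: Path) -> list[str]:
    """Compare current file hashes against a pinned manifest.

    Returns the relative paths whose contents no longer match or are missing.
    """
    manifest = json.loads(manifest_path.read_text())
    root = manifest_path.parent
    mismatches = []
    for rel_path, expected in manifest.items():
        file_path = root / rel_path
        if not file_path.exists() or sha256_of(file_path) != expected:
            mismatches.append(rel_path)
    return mismatches

# Fail the training job before any tampered shard is read (path is illustrative).
manifest = Path("data/train/MANIFEST.json")
if manifest.exists():
    bad = verify_manifest(manifest)
    if bad:
        raise RuntimeError(f"Dataset integrity check failed for: {bad}")
```

Signing the manifest itself (as with the model checkpoints in this module) closes the loop, since an attacker who can alter shards could otherwise alter the manifest too.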
Module 4: Model Deployment and Inference Security
- Hardening inference endpoints using mutual TLS (mTLS) and API gateways with rate limiting and authentication.
- Isolating inference workloads in virtual private clouds (VPCs) with strict egress filtering to prevent data exfiltration.
- Implementing input validation and sanitization layers to defend against adversarial inputs and prompt injection attacks (see the validation sketch after this list).
- Encrypting model payloads in transit and at rest, including serialized models stored in model registries.
- Monitoring inference request patterns for anomalies indicating model scraping or denial-of-service attempts.
- Applying model watermarking techniques to detect unauthorized redistribution or use of proprietary models.
- Configuring auto-scaling groups to maintain security posture under variable inference loads without exposing debug interfaces.
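
A minimal sketch of an input validation layer for an LLM-style inference endpoint. The length limit and blocked patterns are illustrative assumptions; pattern matching is a coarse, bypassable heuristic that complements, rather than replaces, workload isolation and output-side controls.

```python
import re

MAX_PROMPT_CHARS = 4_000  # illustrative limit; tune to the model's context budget
# Coarse heuristics only: lists like this are easy to bypass and should be
# one layer among several, not the sole defense against prompt injection.
SUSPECT_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"\bsystem prompt\b", re.IGNORECASE),
]

class ValidationError(ValueError):
    pass

def validate_inference_request(payload: dict) -> str:
    """Check shape, size, and obvious injection markers before the model sees input."""
    prompt = payload.get("prompt")
    if not isinstance(prompt, str):
        raise ValidationError("prompt must be a string")
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValidationError("prompt exceeds maximum length")
    # Strip control characters that can break downstream log parsing.
    prompt = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", prompt)
    for pattern in SUSPECT_PATTERNS:
        if pattern.search(prompt):
            raise ValidationError("prompt matched a blocked pattern")
    return prompt

print(validate_inference_request({"prompt": "Summarize this quarterly report."}))
```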
Module 5: Monitoring, Logging, and Incident Response for AI Systems
- Designing centralized logging strategies that capture model inputs, outputs, and metadata without violating privacy policies.
- Deploying anomaly detection on model performance metrics to identify potential data drift or adversarial manipulation (see the PSI sketch after this list).
- Integrating AI system logs with SIEM platforms for correlation with broader enterprise security events.
- Establishing incident playbooks specific to AI incidents such as model hijacking, data leakage via outputs, or bias escalation.
- Conducting red team exercises to test detection and response capabilities for AI-specific attack vectors.
- Defining thresholds for automated model rollback based on confidence score degradation or outlier detection.
- Ensuring log retention periods align with regulatory requirements for algorithmic decision-making transparency.
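
Drift monitoring as described above is often implemented with the Population Stability Index (PSI) over model confidence scores. A minimal NumPy sketch; the bin count and the alert thresholds in the comments are common rules of thumb, not fixed standards, and should be tuned per model.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline score distribution and a live window.

    Common rule of thumb (an assumption, tune per model): < 0.1 stable,
    0.1-0.25 monitor, > 0.25 investigate for drift or manipulation.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Floor the fractions to avoid division by zero and log(0).
    exp_frac = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_frac = np.clip(act_counts / act_counts.sum(), 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.7, 0.1, 10_000)  # confidence scores captured at deployment
live = rng.normal(0.55, 0.15, 10_000)    # shifted live window
psi = population_stability_index(baseline, live)
print(f"PSI = {psi:.3f}")  # a high value here would feed the rollback thresholds above
```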
Module 6: Regulatory Compliance and Audit Readiness
- Mapping AI system components to GDPR, CCPA, or HIPAA requirements based on data processing activities.
- Preparing documentation for Data Protection Impact Assessments (DPIAs) involving automated decision-making.
- Implementing audit trails that support explainability requests from data subjects or regulators (see the hash-chained record sketch after this list).
- Conducting third-party audits of AI systems using standardized frameworks like ISO/IEC 27001 or SOC 2.
- Managing cross-jurisdictional compliance when deploying AI models across multiple legal territories.
- Responding to regulatory inquiries by producing version-controlled records of model training, testing, and deployment.
- Updating compliance documentation in response to model retraining or significant data source changes.
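
One illustrative way to make such audit trails tamper-evident is to hash-chain each decision record to its predecessor, so any retroactive edit breaks the chain on replay. A minimal sketch; the field names are hypothetical and not a regulatory schema, and a real deployment would add record signing and append-only storage.

```python
import hashlib
import json
from datetime import datetime, timezone

def decision_record(model_version: str, input_features: dict,
                    output: dict, prev_hash: str) -> dict:
    """Build one hash-chained audit entry for an automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_features": input_features,
        "output": output,
        "prev_hash": prev_hash,  # links this entry to the previous one
    }
    serialized = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(serialized).hexdigest()
    return record

genesis = "0" * 64  # conventional all-zero hash for the first entry
entry = decision_record("credit-risk-v2.3.1",
                        {"income_band": 3, "tenure_months": 48},
                        {"decision": "approve", "score": 0.82},
                        genesis)
print(entry["hash"])
```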
Module 7: Supply Chain and Third-Party Risk Management
- Evaluating third-party AI platforms (e.g., cloud ML services) for compliance with organizational security baselines.
- Conducting security assessments of open-source ML libraries for vulnerabilities and license risks.
- Negotiating data processing agreements (DPAs) with vendors handling training data or model inference.
- Implementing software bill of materials (SBOM) tracking for AI model dependencies and container images.
- Monitoring public vulnerability databases (e.g., NVD/CVE feeds) for exploits affecting ML frameworks like TensorFlow or PyTorch.
- Restricting use of pre-trained models from unverified sources due to potential backdoors or data contamination (see the digest-allowlist sketch after this list).
- Establishing vendor offboarding procedures that include model deprovisioning and data deletion verification.
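
The restriction on unverified pre-trained models above is commonly enforced with a digest allowlist checked before any artifact is loaded. A minimal sketch; the local file stands in for a downloaded checkpoint, and in practice the pinned digests would live in version control or an artifact registry rather than in code.

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Stream-hash an artifact so large checkpoints need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, pinned: dict[str, str]) -> None:
    """Refuse to load any model file whose digest is not on the allowlist."""
    actual = sha256_file(path)
    expected = pinned.get(path.name)
    if expected is None or actual != expected:
        raise RuntimeError(f"{path.name}: digest {actual[:12]}... not approved")

# Demo with a local file standing in for a vetted, downloaded checkpoint.
artifact = Path("model.bin")
artifact.write_bytes(b"weights-placeholder")
pinned_digests = {artifact.name: sha256_file(artifact)}  # recorded at vetting time
verify_artifact(artifact, pinned_digests)  # passes; a modified file would raise
```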
Module 8: Ethical AI and Bias Mitigation as a Security Control
- Integrating bias testing into model validation pipelines using statistical fairness metrics across demographic groups (see the parity-gap sketch after this list).
- Documenting known biases and limitations in model cards for internal governance and external transparency.
- Implementing access controls to prevent unauthorized modification of fairness constraints in production models.
- Using adversarial debiasing techniques during training when models are used in high-stakes decision contexts.
- Establishing review boards to evaluate ethical risks of AI deployments in sensitive domains like hiring or lending.
- Logging model decisions in a way that supports retrospective bias audits without compromising user privacy.
- Designing feedback loops that allow stakeholders to report perceived bias for investigation and model improvement.
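
A minimal sketch of one statistical fairness metric from this module, demographic parity difference, computed over synthetic decisions. Which metric is appropriate (parity, equalized odds, calibration) depends on the decision context and applicable regulation, and the idea of gating promotion on a threshold is an assumption, not a standard.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray,
                                  group: np.ndarray) -> float:
    """Largest gap in positive-prediction rate across demographic groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

rng = np.random.default_rng(1)
y_pred = rng.integers(0, 2, size=1_000)      # binary model decisions (synthetic)
group = rng.choice(["A", "B"], size=1_000)   # protected attribute (synthetic)
gap = demographic_parity_difference(y_pred, group)
print(f"parity gap = {gap:.3f}")  # e.g., block model promotion if gap > policy threshold
```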
Module 9: Secure AI Lifecycle Management and Decommissioning
- Defining end-of-life criteria for AI models based on performance decay, data relevance, or regulatory changes.
- Executing secure model decommissioning procedures including API deactivation and access revocation.
- Archiving model artifacts and training data in encrypted, access-controlled repositories for legal retention.
- Verifying deletion of model copies across development, staging, and disaster recovery environments (see the sweep sketch after this list).
- Updating dependency maps to reflect retired models and prevent accidental reuse in new pipelines.
- Conducting post-mortem reviews to capture security lessons from decommissioned AI systems.
- Notifying stakeholders and integrating decommissioning events into organizational change management logs.
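
Deletion verification across environments can be scripted as a simple sweep, as referenced above. A minimal sketch assuming local filesystem roots; real environments would query object-store and model-registry APIs instead, and the paths and model ID here are hypothetical.

```python
from pathlib import Path

# Environment roots are illustrative; in practice these would be object-store
# prefixes (e.g., S3/GCS) queried via their SDKs rather than local paths.
ENVIRONMENTS = {
    "dev": Path("/srv/models/dev"),
    "staging": Path("/srv/models/staging"),
    "dr": Path("/srv/models/dr"),
}

def verify_deletion(model_id: str) -> dict[str, bool]:
    """Confirm no copy of a retired model remains in any environment."""
    clean_by_env = {}
    for env, root in ENVIRONMENTS.items():
        remaining = list(root.glob(f"{model_id}*")) if root.exists() else []
        clean_by_env[env] = len(remaining) == 0
    return clean_by_env

status = verify_deletion("credit-risk-v1")
for env, clean in status.items():
    print(f"{env}: {'clean' if clean else 'COPIES REMAIN'}")
```

The result of such a sweep is the kind of deletion-verification evidence that feeds the change management log entry above.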