P.2.6 Confirm Integrity of AI model data
Control Details
Objective
Evaluate AI foundation model, training, and fine-tuning data for potential security impacts.
Definition
Inputs to AI model training and evaluation, as well as the outputs of AI models, should be evaluated for security risks such as bias and data poisoning.
Assessment Questions
- What processes are used to evaluate AI models built or consumed in your SDLC, and how is that process similar to or different from the one used for third-party components?
- How are AI models monitored to ensure they are regularly maintained?
- How do you track the provenance of your AI models and data?
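One common building block for the provenance and integrity tracking asked about above is a signed or version-controlled manifest of cryptographic digests for model and dataset artifacts. The sketch below is a minimal illustration of that idea, not a prescribed implementation; the function and file names (`record_provenance`, `manifest.json`) are assumptions for the example.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming to bound memory use."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def record_provenance(artifacts: list[Path], manifest: Path) -> None:
    """Write a manifest mapping artifact names to their digests.

    In practice this manifest would itself be signed or stored in a
    tamper-evident system (e.g. version control).
    """
    digests = {p.name: sha256_of(p) for p in artifacts}
    manifest.write_text(json.dumps(digests, indent=2))


def verify(artifacts: list[Path], manifest: Path) -> dict[str, bool]:
    """Check each artifact against its recorded digest; False means a mismatch."""
    expected = json.loads(manifest.read_text())
    return {p.name: sha256_of(p) == expected.get(p.name) for p in artifacts}
```

A pipeline might call `record_provenance` when model weights or training data are first approved, and `verify` before each training run or deployment, failing closed on any mismatch.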
Reference sources
- SSDF AI PW.3