AI Model Poisoning
The deliberate corruption of an AI model's training data, or of the training process itself, to introduce hidden vulnerabilities, backdoors, or biases that an attacker can exploit after the model is deployed.
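A minimal sketch of one common form of training-data poisoning, label flipping, in which an attacker inverts a fraction of the labels before training. The dataset and function names here are hypothetical, purely for illustration; this is not a real attack tool.

```python
import random

def flip_labels(dataset, fraction, seed=0):
    """Return a copy of (features, label) pairs with `fraction` of the
    binary labels inverted -- a toy model of a label-flipping attack."""
    rng = random.Random(seed)
    poisoned = list(dataset)
    n_flip = int(len(poisoned) * fraction)
    # Attacker silently corrupts a random subset of labels.
    for i in rng.sample(range(len(poisoned)), n_flip):
        x, y = poisoned[i]
        poisoned[i] = (x, 1 - y)
    return poisoned

# Hypothetical clean dataset: label is 1 when the feature is >= 5.
clean = [((float(i),), 1 if i >= 5 else 0) for i in range(10)]
poisoned = flip_labels(clean, fraction=0.3)
changed = sum(1 for a, b in zip(clean, poisoned) if a[1] != b[1])
```

A model trained on `poisoned` instead of `clean` would learn a skewed decision boundary, which is exactly the kind of subtle degradation this attack aims for.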