
Adversarial Example

An input intentionally crafted, often by adding a small perturbation imperceptible to humans, to cause a machine learning model to make an incorrect prediction while the input still appears normal.
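As a minimal sketch, the fast gradient sign method (FGSM) crafts such a perturbation by stepping in the direction of the sign of the loss gradient with respect to the input. The toy logistic-regression weights and inputs below are purely illustrative, not a trained model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier (illustrative weights, not a trained network)
w = np.array([2.0, -3.0])
b = 0.5

def predict(x):
    return int(sigmoid(w @ x + b) >= 0.5)

# Clean input, classified as class 1
x = np.array([0.3, 0.1])

# FGSM: perturb along the sign of the loss gradient w.r.t. the input.
# For cross-entropy loss with true label y, d(loss)/dx = (p - y) * w.
y = 1
p = sigmoid(w @ x + b)
grad = (p - y) * w
eps = 0.2                       # L-infinity perturbation budget
x_adv = x + eps * np.sign(grad)

print(predict(x), predict(x_adv))  # prediction flips from 1 to 0
```

Each coordinate of `x_adv` differs from `x` by at most `eps`, yet the classifier's output changes; the same idea scales to deep networks, where the gradient is computed by backpropagation.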
