← Back to Glossary
LLM Security
The practice of securing large language models against attacks including prompt injection, data extraction, jailbreaking, and training data poisoning.
The practice of securing large language models against attacks including prompt injection, data extraction, jailbreaking, and training data poisoning.