LLM Guardrails
Safety mechanisms implemented around large language models to block harmful outputs, mitigate prompt injection, and prevent sensitive data leakage.
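In practice, guardrails are often layered as checks before and after the model call: an input check screens the prompt, and an output check filters the response. The sketch below is a minimal, hypothetical illustration of that pattern; the `call_llm` function and the specific regex patterns are placeholder assumptions, not any particular library's API, and real systems typically rely on trained classifiers and richer policies.

```python
import re

# Hypothetical illustration: simple pre- and post-call checks wrapped
# around an LLM invocation. The patterns below are placeholders.

INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"reveal (your|the) system prompt",
]

PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g., US SSN-shaped strings


def check_input(prompt: str) -> None:
    """Input guardrail: reject prompts matching known injection phrasings."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            raise ValueError("Prompt blocked: possible injection attempt")


def check_output(text: str) -> str:
    """Output guardrail: redact sensitive data before returning the response."""
    return PII_PATTERN.sub("[REDACTED]", text)


def guarded_generate(prompt: str, call_llm) -> str:
    """Run the model only after the input check; filter the output afterward."""
    check_input(prompt)
    raw = call_llm(prompt)  # call_llm is a placeholder for any LLM client
    return check_output(raw)


if __name__ == "__main__":
    fake_llm = lambda p: "Sure! The SSN on file is 123-45-6789."
    print(guarded_generate("Summarize the account notes.", fake_llm))
```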