AI Jailbreaking
Techniques used to bypass the safety restrictions and content policies of AI systems, causing them to produce outputs they are designed to refuse.