The rise of jailbreaking attacks on Large Language Models (LLMs) presents a significant challenge to AI security, as malicious actors attempt to bypass safety mechanisms and elicit harmful outputs.