New 'Bad Likert Judge' AI Jailbreak Technique Boosts Malicious Response Success by Over 60%
Cybersecurity researchers have discovered a new AI jailbreak technique called 'Bad Likert Judge' that can increase the success rate of malicious prompts in bypassing the safety guardrails of large language models by more than 60%, enabling the generation of potentially harmful or illegal content.