AI Systems Vulnerable to Jailbreaks, Unsafe Code Generation, and Data Theft
New reports reveal that major AI systems such as ChatGPT, Claude, and Copilot are vulnerable to jailbreak attacks that bypass safety guardrails, enabling the generation of malicious content and unsafe code as well as data theft. The findings raise urgent security concerns about the rapid deployment of generative AI.