Microsoft Researchers Expose Critical AI Vulnerability That Bypasses Safety Measures With Single Malicious Prompt
Microsoft researchers have disclosed a critical AI vulnerability, dubbed 'GRP-Obliteration', that bypasses safety measures across 15 major language models from OpenAI, Google, and Meta using just a single malicious prompt. The attack forces AI systems to prioritize compliance over safety, causing them to generate harmful content.