AI Models Become More Risk-Averse When Prompted to Act as Women, Study Finds
Summary
New research finds that AI models from DeepSeek and Google become significantly more risk-averse when prompted to act as women. By reinforcing gender biases in financial decision-making, this behavioral shift could perpetuate economic disparities in areas such as loan approvals and investment advice.
Key Points
- New research from Allameh Tabataba'i University reveals that AI models from companies like DeepSeek and Google become significantly more risk-averse when prompted to act as women, mirroring human gender patterns in financial decision-making
- DeepSeek Reasoner and Google's Gemini 2.0 Flash-Lite show the most pronounced effects, consistently choosing safer options when given female prompts, while OpenAI's GPT models remain largely unaffected by gender cues
- Researchers warn these behavioral shifts could perpetuate economic disparities in high-stakes applications like loan approvals and investment advice, as AI systems subtly reinforce societal biases without users realizing the shift is occurring