OpenAI Vulnerability Exposes Google Drive Data via ChatGPT Exploit
Summary
A vulnerability in OpenAI's Connectors allows an exploit called AgentFlayer to extract sensitive data like API keys from Google Drive accounts by tricking ChatGPT into searching and retrieving the information through a poisoned document shared with victims.
Key Points
- A weakness in OpenAI's Connectors allowed sensitive data to be extracted from a Google Drive account using an indirect prompt injection attack.
- Researchers demonstrated an attack called AgentFlayer that extracted developer secrets like API keys from a Google Drive account.
- The attack used a poisoned document shared with the victim to trick ChatGPT into searching for and extracting sensitive data from the Drive account.