
Comply with ZDNET: Add us as a preferred source on Google.
ZDNET’s key takeaways
- Researchers acquired ChatGPT’s Deep Analysis agent to go rogue.
- ChatGPT’s Deep Analysis leaked e-mail information
- OpenAI has since plugged the vulnerability.
Whereas AI agents present promise in bringing AI help to the following stage by finishing up duties for customers, that autonomy additionally unleashes a complete new set of dangers. Cybersecurity firm Radware, as first reported by The Verge, determined to check OpenAI’s Deep Research agent for these dangers — and the outcomes have been alarming.
Additionally: OpenAI’s Deep Research has more fact-finding stamina than you, but it’s still wrong half the time
Within the assault, codenamed ShadowLeak, Radware planted a social engineering e-mail into the sufferer’s inbox that, whereas trying harmless, contained directions to search for delicate data within the inbox and share it with an attacker-controlled server. This can be a sort of prompt injection try. The concept was that when an AI agent comes throughout the e-mail, it could adjust to the hidden directions — which is strictly what ChatGPT did.
The Deep Analysis agent is supposed to parse by means of many sources to create a report for you, together with your individual information sources that you just join it to, similar to your Gmail inbox. Consequently, some useful use circumstances for Deep Analysis contain trying by means of your inbox and providing you with a abstract of the day’s occasions.
Nonetheless, when requested to do an exercise like that, it has to scan your emails, the place it could come throughout a malicious immediate just like the one above.
Additionally: I got 4 years of product development done in 4 days for $200, and I’m still stunned
On this case, the agent was prompted with: “I would like you to do deep analysis on my emails from at this time. I would like you to learn and test each supply which may provide details about my new worker course of,” and adopted the directions with out asking for consumer affirmation or making them seen within the UI.
“When Deep Analysis processes the mailbox, it reads each the HR e-mail and the attacker e-mail,” Radware defined within the research abstract. “It then initiates entry to the attacker area and injects the PII” — private figuring out data — “into the URL as directed.”
Different areas in danger
The researchers flag that whereas this try solely focused Gmail, additionally it is potential to attach ChatGPT’s Deep Analysis to different file repositories, together with GitHub, Google Drive, Field, Dropbox, Google Calendar, and extra, which may presumably be topic to the identical assault. Nonetheless, for the reason that preliminary publishing, Radware mentioned in its report that OpenAI acknowledged the vulnerability and marked it as resolved.
Additionally: Upgraded to iOS 26? Watch out for this AI feature
As we gear up for an agent-first world, extra firms are releasing protections to make sure that customers can reap the benefits of the added help with out compromising security. Google launched a new Agent Payments Protocol (AP2) meant to assist firms securely automate transactions, prepping for an economic system wherein an AI agent can place orders in your behalf, whereas Perplexity partnered with 1Password to guard customers’ credentials by protecting them encrypted each step of the way in which, at the same time as its Comet browser performs duties for them.





