A new issue was found in OpenAI’s ChatGPT Atlas browser, and researchers warn it could allow attackers to secretly plant malicious commands that remain active long after a user closes the app.
The flaw, uncovered by cybersecurity firm LayerX Security, makes it possible for hackers to inject instructions into the AI assistant’s persistent memory, effectively turning one of Atlas’s most innovative features into a backdoor for exploitation.
Also read: OpenAI Launches Atlas — A ChatGPT-Powered Browser Aiming to Challenge Google Chrome
Also read: OpenAI Buys Sky, a Mac-Based AI Interface Startup
“This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware,” said Or Eshed, co-founder and CEO of LayerX, in a report shared with The Hacker News.
How the Attack Works
At the heart of the issue is a cross-site request forgery (CSRF) weakness. It allows a malicious webpage to slip hidden instructions into ChatGPT’s memory when a victim is logged in.
Once the “tainted” memory is stored, it stays there — even across devices, browsers, or new sessions. The next time the user interacts with ChatGPT, the injected commands can quietly execute in the background.
“What makes this exploit so dangerous is that it doesn’t just live in your browser session,” explained Michelle Levy, LayerX’s head of security research. “It goes after the AI’s persistent memory, which survives across everything you use.”
In tests, researchers found that once ChatGPT’s memory was compromised, even innocent prompts could trigger code downloads, data theft, or privilege escalation — without any visible warning to the user.
Why It’s Especially Risky
OpenAI introduced memory for ChatGPT in early 2024 to make conversations more personal and consistent — remembering names, preferences, and projects across chats. But that same convenience now creates an opening for attackers.
LayerX’s report warns that tainted memories could survive indefinitely unless users manually delete them from settings. “A helpful feature becomes a persistent weapon,” the researchers wrote.
Adding to the problem, Atlas reportedly lacks the phishing and threat detection safeguards built into traditional browsers. In LayerX’s testing of more than 100 live attack scenarios:
- Microsoft Edge blocked 53% of threats
- Google Chrome stopped 47%
- ChatGPT Atlas caught just 5.8%
That gap, researchers say, leaves Atlas users up to 90% more vulnerable than people browsing with mainstream browsers.
A Wider AI Security Problem
The Atlas exploit is the latest in a growing list of AI-powered software vulnerabilities. Earlier this month, NeuralTrust demonstrated that Atlas could be tricked into executing malicious commands disguised as URLs.
Security experts say these incidents reveal a new kind of risk: AI platforms that blend browsing, identity, and automation into a single system.
“AI browsers are becoming the new supply chain,” Eshed warned. “They travel with the user, contaminate future work, and blur the line between automation and control.”
As enterprises increasingly rely on AI tools to write code, process data, and automate tasks, a single infected memory could potentially spread through work environments, exposing sensitive information or altering workflows.
The Bigger Picture
The discovery highlights a hard truth about today’s AI systems: as they become more autonomous, they also become more exploitable.
If attackers can write to an AI’s long-term memory, they don’t just hijack a session — they hijack its future behaviour. It’s a subtle but powerful form of compromise, one that cybersecurity experts warn could redefine what “persistent threat” means in the AI era.
For now, researchers advise users to manually clear ChatGPT’s memory and avoid clicking unknown links while logged into Atlas. But until OpenAI rolls out a fix, the safest approach might be the simplest: stick to a traditional browser.