When we talk about cybersecurity, we usually imagine hackers outsmarting people. But what happens when it’s AI doing the clicking instead?
The new generation of AI browsers, like ChatGPT Atlas, Opera Neon, Perplexity Comet, and The Browser Company’s Dia, all promise to surf the web for you. They can read sites, follow links, fill out forms, and even make purchases. It’s an impressive glimpse of the future, until you realize one thing: humans always fall for scams, so what’s stopping these automated AI agents from falling for the same problems?
How AI browsers and agents fall for scams
It’s grim reading across the board for AI browsers
OpenAI’s Atlas browser was a long time coming and launched with a great deal of hype. But it didn’t take long for researchers to get stuck into the browser to find out how seriously OpenAI takes security.
The results weren’t encouraging. Security firm LayerX found two significant problems with Atlas within a few days of the browser’s launch.
One vulnerability focused on prompt injection, allowing for the injection of malicious instructions into ChatGPT’s memory feature. The instructions then allowed the execution of remote code, which is extremely dangerous. LayerX’s research also found that Atlas stopped just 5.8 percent of the malicious web pages it encountered. So, more than 90 percent of the time, it would interact with phishing pages instead of closing the tab and moving on.
In fairness to OpenAI and its Atlas browser, it was far from the only AI browser with agentic AI features to perform poorly in LayerX’s testing. Perplexity’s super-popular Comet AI browser only stopped 7 percent of phishing pages.
By comparison, Edge, Chrome, and Dia stopped significantly more, rejecting 53, 47, and 46 percent of attacks, respectively.
Scamlexity
Guard.io encountered a similar range of problems, although this was before the launch of ChatGPT Atlas. It’s Scamlexity study (great name) put “agentic AI browsers to the test — they clicked, they paid, they failed.”
Guard.io developed “PromptFix,” which it dubs “an AI-era take on the ClickFix scam,” designed to hide prompt injection attacks inside fake CAPTCHA screens. With invisible text embedded in the fake CAPTCHA, AI agents using automated browsing modes could be easily fooled into buying products, downloading files, and more.
Similarly, when it presented phishing emails to Perplexity’s Comet browser (after asking it to handle incoming emails), it immediately got stuck in, adding the user information to the fake pages. It even prompted the user to enter their credentials, declaring the page safe.
AI browsers on autopilot click the bad stuff
AI agents are susceptible to scams humans can’t even see
The big problem is that AI browsers and agentic AI browsing prompt us to switch off from what we’d normally be doing. You’re tasking an automated system to make decisions for you and by doing so, potentially missing important red flags that alert you to scams that the AI model cannot understand.
Right now, these attacks mostly happen in security labs, not in the wild. But the danger is real and growing fast. AI browsers don’t just view sites; they also handle logins, store cookies, and sometimes retain access to connected accounts. So, the potential for misuse is rife and clearly something that attackers are actively looking to exploit.
Part of the problem for regular folks is that we’re talking about scams that the human eye can’t always detect, even if you are paying attention to the screen. These secretive prompt injection attacks typically embed malicious commands out of view, and you don’t know what’s taken place until after the fact, when you’ve been scammed.
And where traditional browsers have alerts and pop-ups to warn when something doesn’t seem right, an AI agent browsing automatically may just see this as another challenge to overcome.
There is only one solution
Normal security tools aren’t always useful
Modern security problems require modern solutions, and the problems posed by AI browsers with automated agentic browsing need new solutions. That’s because the usual defences, like Google Safe Browsing or antivirus tools, weren’t designed for an AI clicking links on your behalf. They’re catching up, without a doubt, but it’s still uncharted territory.
For example, when researchers compared AI browsers to standard ones, the difference was stark. Google’s Safe Browsing could block known bad URLs, but when scammers spun up fresh domains—“wellzfargo-security.com” instead of “wellsfargo.com”—the AI agents clicked anyway. They don’t second-guess the domain name; they only see that the page looks like a bank login and dutifully continue.
Worse, prompt injection attacks don’t rely on URLs at all. They live inside legitimate pages or PDFs, invisible to traditional scanning tools.
The only real solution at the current time is human oversight. That takes away from the magical feeling of complete automation in your browser, but in reality, it’s an important step in making sure your browser doesn’t fall for a phishing scam.
AI browsers like Opera’s Neon take regular pauses to ask for human input, making sure that the next step in the plan is okay and the work completed so far is acceptable. It’s those small moments of human interaction that can help to mitigate the issues of agentic AI browsers falling for scams, downloading malware, or worse.
It’s a new dawn for scammers
With a whole new range of scams to work on
AI browsers mark a shift in who the scammer needs to fool. The target is no longer you—it’s your agent. And right now, that agent is far too trusting.
Given the research detailed above found that these systems fell for over 90% of phishing pages and missed nearly every social engineering cue a person would spot in seconds, it’s clear that human oversight is absolutely vital—even if it’s often humans that are the weak link.












