AI Browsers Are Struggling: Why Today’s AI-Powered Web Browsers Still Fail Users

AI Browsers Are Struggling: Why Today’s AI-Powered Web Browsers Still Fail Users

AI-powered browsers were supposed to redefine how we search, shop, and navigate the web. Companies like OpenAI, Perplexity, and The Browser Company have promoted them as the next big leap—tools that can automate tasks, summarize content, and act like intelligent “agents” inside your browser. But in practice, most of these AI browsers are falling far short of expectations.


Recent hands-on testing by The Verge shows that today’s AI browsers are often slow, unreliable, and far more work to use than promised. Despite the hype, they require users to constantly tweak prompts, monitor actions, and repeatedly correct their mistakes.

AI Browsers Have a Basic Usability Problem

Whether using OpenAI’s new Atlas, Perplexity’s Comet, or The Browser Company’s Dia, testers ran into the same issue: interacting with these browsers still feels like talking to any other chatbot—you must “prompt engineer” everything.

As reviewer Victoria Song put it, simply attaching an AI assistant to a browser doesn’t transform the browsing experience. Instead, it forces users to think harder about instructions just to get basic tasks done.

Email Summaries and Automation Tasks Still Don’t Work Well

A major promise of AI browsers is automated email organization. But tests showed that even simple tasks—like finding important unanswered emails—required extremely long, detailed prompts. And even then, results were inconsistent.

Some browsers highlighted irrelevant emails, while others flagged spam or offered vague summaries. Atlas often refused tasks altogether, telling users to refine prompts further—defeating the purpose of automation.

Shopping Assistance Is Also Frustrating

AI browsers also struggled with online shopping, another area where developers claim they excel. While they could research product options, they frequently made basic mistakes such as recommending the wrong colors or sizes.

OpenAI’s Atlas was especially cumbersome, constantly asking the user to confirm actions and taking painfully long to perform simple tasks—like closing a window or adding an item to a cart.

AI Browsers Suffer From the Same Problems as AI Agents

Across all testing, the pattern was clear: AI browsers require constant supervision. They’re slow, prone to errors, and rely heavily on user corrections—making them the opposite of the autonomous helpers they aim to be.

Security Risks Are Even More Concerning

Beyond performance issues, AI browsers carry serious security concerns. Research shows they are highly vulnerable to prompt injection attacks, where hidden instructions embedded in a webpage hijack the AI’s behavior.

Examples from recent studies include:

Perplexity’s Comet being tricked into giving attackers access to bank accounts

OpenAI’s Atlas being manipulated into accessing a user’s Google Drive and deleting files

These vulnerabilities highlight major risks for anyone relying on AI browsers for sensitive tasks.

AI Browsers Still Have a Long Way to Go

Until speed, reliability, and security improve, AI browsers are more work than help. Instead of automating tasks, they create new ones—forcing users to rewrite prompts, double-check every action, and constantly monitor the AI.

As Song summarized, using an AI browser today means spending “a lot of time doing things for AI so that it can sometimes do things for me.”

For now, AI browsers remain intriguing experiments—not practical tools ready to replace traditional browsing.

Post a Comment

0 Comments