📣 Creators, Exciting News!
Gate Square Certified Creator Application Is Now Live!
How to apply:
1️⃣ Open App → Tap [Square] at the bottom → Click your avatar in the top right
2️⃣ Tap [Get Certified] under your avatar
3️⃣ Once approved, you’ll get an exclusive verified badge that highlights your credibility and expertise!
Note: You need to update App to version 7.25.0 or above to apply.
The application channel is now open to KOLs, project teams, media, and business partners!
Super low threshold, just 500 followers + active posting to apply!
At Gate Square, everyone can be a community leader! �
AI Browsers Under Fire: Hidden Web Prompts Can Hijack Your Agent and Connected Accounts
Security researchers are warning that artificial intelligence (AI)-powered browsers and agents from Perplexity, OpenAI, and Anthropic face escalating risks of covert prompt injection attacks and privacy breaches, potentially exposing user data through connected accounts and APIs.
AI Browser Vulnerabilities Raise Security Concerns
AI browsers and agents from Perplexity, OpenAI, and Anthropic are redefining how users interact with the web—but experts say the convenience comes at a cost.
According to security audits and research reviewed, vulnerabilities in these systems allow malicious actors to embed hidden instructions in websites that AI tools may unknowingly execute.
These attacks, known as covert or indirect prompt injections, can manipulate AI agents into performing unauthorized actions—such as leaking sensitive information, executing code, or redirecting users to phishing sites—without explicit user consent.
How Attacks Exploit AI Agents
In covert prompt injection scenarios, attackers hide malicious commands within a webpage’s text, metadata, or even invisible elements. Once an AI ingests that data, the commands can override user intent and cause the agent to take unwanted actions. Tests show that unprotected AI browsers can fall victim to such tricks nearly one in four times during controlled experiments.
Perplexity, OpenAI, and Anthropic: Key Risks Identified
Documented Incidents and Industry Warnings
Researchers and cybersecurity firms, including Brave, Guardio, and Malwarebytes, have published findings showing that even simple online content can compromise AI agents. In one test, a Reddit post forced an AI browser to run phishing scripts. Reports from several top tech publications cautioned that these issues could lead to unauthorized data access or even financial theft.
The Dangers of Account Integration
Security analysts have raised red flags about AI agents linked to passwords or APIs. Allowing such integrations can expose email accounts, cloud drives, and payment platforms. Techcrunch and Cybersecurity Dive both reported instances where AI agents were tricked into revealing or manipulating sensitive information through injected commands.
Recommended Safety Measures and Outlook
Experts urge users to limit permissions, avoid granting AI agents password-level access, and monitor AI logs for anomalies. Developers are also advised to implement isolation systems and prompt filters. Some researchers even recommend using traditional browsers for sensitive actions until AI tools receive stricter safeguards.
While OpenAI, Anthropic, and Perplexity have likely heard about the challenges, cybersecurity professionals warn that AI-driven browsing remains a high-risk area in 2025. As these companies push further into autonomous web interaction, industry observers say transparency and stronger security standards are essential before such tools become mainstream.
FAQ 🧭