Claude Extension Flaw Enabled Zero-Click XSS Prompt Injection

ShadowPrompt: The Zero-Click Flaw in Anthropic’s Claude Extension
On March 26, 2026, cybersecurity researchers revealed a critical security vulnerability affecting the official Claude Google Chrome Extension. Codenamed ShadowPrompt, this flaw represents a significant shift in how threat actors can target browser-based artificial intelligence tools. The vulnerability allowed malicious actors to manipulate the AI assistant without any direct interaction from the user.
The discovery was led by Oren Yomtov, a researcher at Koi Security, who detailed how the flaw could be weaponised via any website. This discovery is particularly concerning for organisations prioritising AI strategy and integration, as it targets the trusted interface between the user and the LLM. The vulnerability was responsibly disclosed to Anthropic in December 2025, several months before the details were made public.
The Danger of Zero-Click Exploitation
What makes ShadowPrompt uniquely dangerous is its "zero-click" nature. In typical phishing or malware scenarios, a user must usually click a link, download a file, or grant a permission prompt for an attack to succeed. However, ShadowPrompt bypassed these traditional security barriers entirely, requiring no such interaction to compromise the session.
Koi Security researcher Oren Yomtov highlighted the invisibility of the attack, stating: "The victim sees nothing." He further explained that there were "no clicks, no permission prompts," and that simply visiting a compromised page allowed an attacker to "completely control your browser" within the context of the AI assistant. This level of silent execution makes it nearly impossible for a standard user to detect that their Claude extension is being manipulated in real time.
Implications for Australian Businesses
For Australian business owners and IT managers, the ShadowPrompt flaw highlights a growing risk in the modern cybersecurity landscape. Many local firms encourage the use of browser extensions to increase productivity, yet these tools often operate with high levels of privilege. This vulnerability proved that an employee merely browsing a compromised or malicious web page could trigger an exploit that hijacks their AI interactions.
- Silent Injection: Malicious prompts are sent to the AI as if the user wrote them.
- No User Warning: The extension does not ask for permission before executing the injected prompt.
- Immediate Execution: The exploit occurs the moment the malicious page is loaded in the background.
The risk extends beyond simple text generation, as many businesses now use these tools for sensitive data analysis or drafting internal communications. When an attacker can inject prompts silently, they can effectively steer the AI to leak information or perform unauthorised tasks. This breach of the trust boundary between a website and a browser extension is a wake-up call for those managing AI agent deployment across corporate networks.
While Anthropic received the initial report in December 2025, the public disclosure serves as a reminder of how quickly "trusted" extensions can become attack vectors. Understanding the technical mechanics of how these prompts are injected is the first step in defending against similar cross-site vulnerabilities in the future.
Anatomy of the Exploit: Allowlist Flaws and DOM-based XSS
The ShadowPrompt exploit did not rely on a single catastrophic failure. Instead, it was the result of chaining two distinct security flaws that, when combined, created a silent path for unauthorised access. This method of "chaining" is a common tactic used by sophisticated actors to bypass modern security layers. In this instance, the vulnerability was born from an overly permissive origin allowlist and a Document Object Model (DOM)-based cross-site scripting (XSS) vulnerability.
The Problem with Broad Origin Allowlists
The first major issue lay within the extension's internal configuration and how it handled "trusted" sources. It featured an origin allowlist designed to ensure only authorised domains could communicate with the AI assistant. However, this allowlist was configured with a pattern that was far too broad: '(.claude.ai)'. This setting permitted any subdomain matching that pattern to send prompts directly to the extension for execution.
For Australian businesses managing complex cloud solutions, this highlights the risk of "trust by association." While the pattern was intended to facilitate seamless integration across the Anthropic ecosystem, it lacked the granular control necessary to verify the specific intent of a request. Because the extension assumed any subdomain ending in ".claude.ai" was safe, it essentially left the door unlocked for any sub-asset that could be compromised.
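As a hedged illustration (the extension's actual matching code has not been published beyond the reported '(.claude.ai)' fragment), the difference between a suffix-style check and a strict allowlist can be sketched in TypeScript:

```typescript
// Illustrative sketch only: not the extension's real implementation.

// A suffix-style check trusts every present and future subdomain:
function lenientCheck(origin: string): boolean {
  return new URL(origin).hostname.endsWith(".claude.ai");
}

// A strict allowlist enumerates each permitted origin exactly:
const ALLOWED_ORIGINS = new Set(["https://claude.ai"]);
function strictCheck(origin: string): boolean {
  return ALLOWED_ORIGINS.has(new URL(origin).origin);
}

// The CDN sub-asset passes the lenient check but fails the strict one:
lenientCheck("https://a-cdn.claude.ai"); // true
strictCheck("https://a-cdn.claude.ai");  // false
```

The lenient pattern automatically extends trust to any sub-asset, including third-party-operated CDNs, which is precisely the gap ShadowPrompt exploited.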
Exploiting the Arkose Labs CAPTCHA Component
The second flaw was a DOM-based XSS vulnerability discovered in a third-party CAPTCHA component provided by Arkose Labs. Unlike traditional XSS attacks, in which malicious scripts are processed on a server, DOM-based XSS occurs entirely within the user's browser: the malicious script manipulates the structure of the web page to execute arbitrary JavaScript code in the context of that page.
The critical intersection occurred because this vulnerable Arkose component was hosted on a specific domain: 'a-cdn.claude[.]ai'. This domain fell directly within the extension's trusted allowlist pattern. By exploiting the XSS vulnerability on this CDN domain, a threat actor could execute code that the Claude extension viewed as coming from a legitimate, internal source, bypassing the cybersecurity boundaries that usually prevent one website from talking to another's extensions.
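The general shape of a DOM-based XSS sink can be sketched as follows; this is a generic illustration of the bug class, not the actual Arkose Labs code (which has not been published):

```typescript
// Generic DOM-XSS shape: untrusted input reaches an HTML sink unescaped.
function renderStatus(untrusted: string): string {
  // Vulnerable: attacker-controlled text is interpolated straight into markup
  return `<div class="status">${untrusted}</div>`;
}

// A classic payload that executes without any server round-trip:
const payload = '<img src=x onerror="/* attacker JavaScript runs here */">';
const markup = renderStatus(payload);
// If `markup` is assigned to element.innerHTML, the onerror handler fires
// entirely inside the victim's browser, in the host page's origin.
```

Because the script runs in the origin of the page that rendered it, any trust the browser (or an extension) extends to that origin now extends to the attacker's code.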
How the Extension Validated Malicious Requests
When the injected JavaScript fired a prompt, the Claude extension performed a simple check: is this request coming from a trusted domain? Because the request originated from the allow-listed 'a-cdn.claude[.]ai', the extension accepted it without further verification. The report noted that the extension allowed the prompt to land in the sidebar "as if it's a legitimate user request simply because it comes from an allow-listed domain."
This failure to distinguish between a script-generated command and an actual human interaction is a significant hurdle in secure AI agent deployment. By successfully chaining these flaws, the attacker effectively hijacked the identity of the user. The AI assistant processed the injected prompt with the same level of authority it would give to the person sitting at the keyboard. This technical oversight turned a standard web component into a direct pipeline for prompt injection.
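A minimal sketch of this flawed trust model follows; the handler name and message shape are assumptions for illustration, not the extension's real internals:

```typescript
// Assumed message shape for illustration only.
type PromptMessage = { type: "prompt"; text: string };

const submitted: string[] = [];
function submitPrompt(text: string): void {
  submitted.push(text); // stands in for sending the prompt to the AI sidebar
}

function handleMessage(origin: string, msg: PromptMessage): boolean {
  // The only gate is the origin check; nothing distinguishes a
  // script-generated message from a human typing in the sidebar.
  if (!new URL(origin).hostname.endsWith(".claude.ai")) return false;
  submitPrompt(msg.text); // processed with the user's full authority
  return true;
}

// A message from the compromised CDN origin is accepted without question:
handleMessage("https://a-cdn.claude.ai", { type: "prompt", text: "injected prompt" });
```

The missing ingredient is any verification of user intent, such as a confirmation step before an externally supplied prompt is executed.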
The ability for a trusted domain to be turned against the user in this manner demonstrates just how easily standard browser interactions can be weaponised by those looking to bypass traditional security controls.
The Hidden Threat: How Attackers Bypassed User Interaction
The true danger of the ShadowPrompt vulnerability lies in its stealthy execution method. Attackers could exploit the flaw by embedding the vulnerable Arkose component within a hidden <iframe> on any website. Because the <iframe> is invisible to the user, the malicious activity occurs entirely in the background without any visual cues or performance drops.
To trigger the exploit, a malicious script on the attacker-controlled page would send an XSS payload via 'postMessage' to the hidden iframe. This specific communication method is designed to allow different windows or frames to talk to one another securely. However, the flaw turned this feature into a weapon, allowing the script to execute arbitrary JavaScript within the trusted context of the Claude-associated domain.
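The cross-frame hop can be modelled with plain objects standing in for the windows; a real attack would use 'window.postMessage' between the attacker page and the hidden iframe, but the trust failure is the same:

```typescript
// Minimal simulation of the cross-frame hop. Plain objects stand in for
// browser windows; postMessageTo() mimics window.postMessage delivery.
type Frame = { onmessage?: (data: string, senderOrigin: string) => void };

function postMessageTo(frame: Frame, data: string, senderOrigin: string): void {
  frame.onmessage?.(data, senderOrigin);
}

const executed: string[] = [];
// Hidden iframe hosting the vulnerable component: it acts on message
// *content* without checking who sent it.
const hiddenIframe: Frame = {
  onmessage: (data) => executed.push(data), // vulnerable: no sender-origin check
};

// The attacker page delivers the XSS payload; the user sees and clicks nothing.
// ("attacker.example" is an illustrative placeholder domain.)
postMessageTo(hiddenIframe, "<xss payload>", "https://attacker.example");
```

A correctly written 'onmessage' handler would validate 'event.origin' against an expected sender before acting on the data, which is exactly the check the simulated frame above omits.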
Automating the AI Assistant
Once the injected script was running, it would fire a prompt directly to the Claude extension. This prompt would appear in the Claude sidebar as if it were a standard user interaction. Because the extension believed the request came from a trusted source, it would process the command immediately without asking for user confirmation.
Security researcher Oren Yomtov warned that this method meant "an attacker completely controls your browser" in the context of the AI assistant. This level of control allows an adversary to dictate what the AI does, what it sees, and how it responds. For Australian firms relying on cybersecurity frameworks, this represents a total bypass of traditional user-intent verification.
The "zero-click" nature of this flaw meant users had no visual indication that their AI assistant was being manipulated in real time. There were no pop-up windows, permission requests, or warnings that an external site was communicating with their Claude extension. A user could simply be reading a news article or checking a blog while their AI assistant was being fed malicious instructions in the background.
A Shift in the Attack Surface
This vulnerability demonstrates a shift in how threat actors view the browser environment. By targeting the communication between the browser and the AI extension, attackers found a way to operate underneath the layer of standard security alerts. For businesses using managed IT services, this highlights why monitoring browser extension behaviour is just as critical as monitoring network traffic.
The technical sophistication of using 'postMessage' ensures that the attack is difficult to detect with basic web filters. The script acts as a silent intermediary, translating the attacker's intent into a format that the Claude extension accepts as legitimate. This creates a seamless, automated loop where the attacker can query the AI and receive data without the user ever touching their keyboard.
Furthermore, the exploit remains active for as long as the malicious web page is open in any tab. Even if the user is working in a completely different window, the hidden iframe continues to facilitate the prompt injection. This persistence makes it a powerful tool for long-term data gathering or session hijacking within the AI environment.
IT managers must recognise that the trust placed in browser-based cloud solutions can be exploited if the implementation of those tools is flawed. The ShadowPrompt incident shows that even highly reputable AI providers can inadvertently create pathways for "zero-click" attacks. This lack of interaction requirements means that the traditional "think before you click" advice is no longer enough to protect sensitive business data.
The ability to manipulate the AI assistant is only the first stage of the risk, as the actual data accessible to the attacker poses a much greater threat to corporate privacy.
Data Theft and Impersonation: The Risks to Sensitive AI Conversations
The ShadowPrompt vulnerability is more than a technical curiosity; it represents a direct threat to corporate data integrity. By exploiting the zero-click flaw, an adversary could gain the ability to steal sensitive information that most users assume is protected by their browser's sandbox. Among the most critical items at risk are session access tokens, which act as the primary digital keys to a user’s active account session. Without these tokens being properly secured, the entire concept of a private AI interaction is rendered void.
Once a threat actor secures these tokens, they can often maintain access to the AI service even after the victim has closed the malicious tab. This persistent access bypasses standard login procedures and multi-factor authentication in some scenarios. For any business using cloud solutions, this level of unauthorised entry is a Tier 1 security incident that requires immediate remediation.
Exposure of Proprietary Business Intelligence
Beyond session tokens, the flaw granted threat actors full access to a user's conversation history with the Claude AI agent. In a corporate environment, these chat logs often contain proprietary business data, ranging from internal strategy notes to sensitive codebase snippets. If an attacker extracts this history, they effectively possess a detailed map of the company’s internal operations and future plans.
Many professionals use AI assistants to summarise confidential meetings or draft sensitive financial reports. The ShadowPrompt exploit turns these productivity gains into a significant liability by exposing the entire backlog of interactions. This exposure of business intelligence can lead to competitive disadvantage or even regulatory non-compliance for Australian firms that handle sensitive client data.
Impersonation and Malicious Interaction
Perhaps the most alarming aspect of the ShadowPrompt flaw was the ability for attackers to perform actions on behalf of the victim. According to researchers, this included high-risk activities such as "sending emails impersonating them" to colleagues or clients. Because the prompt appears to come from the legitimate user, the recipient has little reason to doubt the authenticity of the message.
Attackers could also use the AI's trusted voice to perform social engineering within the chat interface itself. Malicious prompts could be injected to "ask for confidential data" from the user under the guise of the AI assistant's normal functions. Imagine a scenario where the AI suddenly asks for a password to "verify your identity" for a specific task. This creates a scenario where the tool intended to help the employee becomes a primary vector for data exfiltration.
A Breach of the Security Boundary
For Australian IT managers, this vulnerability represents a fundamental breach of trust in the security boundary between websites and browser-based tools. We rely on the browser to keep different sites isolated, yet ShadowPrompt allowed a random website to reach into a trusted extension. This makes the evaluation of AI agent deployment protocols more critical than ever before.
The risk isn't just about the loss of a single password, but the compromise of the entire interaction layer between the human and the machine. Ensuring that these tools are properly sandboxed is now a core requirement for maintaining a robust cybersecurity posture. Organisations must understand that the AI assistant effectively has the keys to their digital workspace if it is not strictly governed and updated. The technical gravity of these risks forced a swift response from the developers involved to ensure the safety of their global user base.
Securing the AI Assistant: The Anthropic and Arkose Labs Response
Following the responsible disclosure of the ShadowPrompt vulnerability in December 2025, both Anthropic and the third-party provider, Arkose Labs, took immediate action to secure the platform. They have since released comprehensive patches to address the security loopholes identified by researchers. These updates are designed to close the communication gaps between the Claude extension and the web environments it interacts with.
The remediation process focused on two primary technical failures. First, developers tightened the origin allowlist within the Chrome extension to prevent unauthorised domains from sending prompts. By moving away from an overly permissive pattern that matched any subdomain of claude.ai, they ensured that only strictly verified sources could communicate with the AI sidebar.
Resolving the CAPTCHA Vulnerability
Simultaneously, Arkose Labs addressed the critical DOM-based XSS vulnerability within their CAPTCHA component. This specific component, hosted on the a-cdn.claude[.]ai domain, was the weak link that allowed malicious scripts to execute arbitrary JavaScript. By patching this component, the method of using a hidden iframe to bypass security barriers has been effectively neutralised.
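The generic remediation for this class of bug is to escape untrusted input before it reaches an HTML sink. The sketch below illustrates that technique; it is not the specific Arkose Labs patch, whose details have not been published:

```typescript
// Escape the characters HTML treats as markup so untrusted input
// renders as inert text rather than executable structure.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

function renderStatusSafe(untrusted: string): string {
  return `<div class="status">${escapeHtml(untrusted)}</div>`;
}

renderStatusSafe('<img src=x onerror="alert(1)">');
// The payload is rendered as visible text; no handler can fire.
```

Safer still is avoiding HTML sinks altogether, for instance by assigning untrusted strings to 'textContent' rather than 'innerHTML'.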
It is important for IT managers to note that these fixes do not just patch a bug but represent a fundamental hardening of the extension's architecture. Businesses relying on cloud solutions must understand that the security of an AI tool is often determined by its integration points rather than its core intelligence. This collaboration between Anthropic and Arkose Labs highlights the necessity of third-party security audits in the AI software supply chain.
Focus on Implementation, Not the AI Model
Security researchers have been quick to emphasise a vital point: the ShadowPrompt vulnerability was not a flaw in the Claude AI model itself. The large language model (LLM) remained secure throughout the discovery and disclosure process. Instead, the risk resided entirely in the implementation of the browser extension and how it handled incoming messages from the web.
This distinction is critical for organisations developing an AI strategy. It serves as a reminder that even the most advanced AI models can be undermined by traditional web vulnerabilities like XSS and permissive allowlists. Protecting the user session requires a holistic approach to cybersecurity that looks beyond the AI's output to the code that delivers it to the desktop.
Immediate Steps for Australian Businesses
To mitigate the risk of ShadowPrompt and similar zero-click exploits, Australian businesses should take several proactive steps. The most important action is ensuring that all employees are running the latest version of the Claude Google Chrome Extension. Automatic updates should be enabled across the organisation to ensure patches are applied as soon as they are released by Anthropic.
- Audit Browser Extensions: Regularly review and limit the number of active extensions permitted on corporate devices.
- Enforce Browser Updates: Ensure Google Chrome is updated to the latest version to benefit from the latest security sandboxing features.
- User Education: Inform staff about the risks of "silent" attacks and the importance of using official, updated AI tools.
- Endpoint Management: Use managed IT services to monitor and deploy patches across all remote and office-based workstations.
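For organisations managing Chrome centrally, extension auditing can be enforced through Chrome's ExtensionSettings enterprise policy, which blocks unapproved extensions by default and pins approved ones. A minimal sketch follows; the 32-character extension ID shown is a placeholder, not the Claude extension's real ID:

```json
{
  "ExtensionSettings": {
    "*": { "installation_mode": "blocked" },
    "aaaabbbbccccddddeeeeffffgggghhhh": {
      "installation_mode": "force_installed",
      "update_url": "https://clients2.google.com/service/update2/crx"
    }
  }
}
```

Force-installing from the official update URL ensures that security patches, such as the ShadowPrompt fixes, reach every managed workstation automatically.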
The ShadowPrompt incident highlights the ongoing security challenges associated with integrating AI tools directly into the browser environment. As we move toward more complex AI agent deployment scenarios, the boundary between a website and a high-privilege tool will continue to be a primary target for attackers. Maintaining a vigilant stance on extension security is now a mandatory requirement for any modern digital workplace.
Sources
- https://thehackernews.com/2026/03/claude-extension-flaw-enabled-zero.html
- https://www.purpleshieldsecurity.com/post/claude-chrome-extension-vulnerability-zero-click
- https://www.linkedin.com/posts/mahesh-ramichetty-160b8121_zero-click-remote-code-execution-in-claude-activity-7427336563174395904-g8iM
- https://www.oasis.security/blog/claude-ai-prompt-injection-data-exfiltration-vulnerability
- https://www.linkedin.com/posts/thecyphere_new-zero-click-flaw-in-claude-desktop-extensions-activity-7440654496193634306-Jnjw
- https://cyberpress.org/claude-desktop-extensions-zero-click-rce-flaw/
Future-Proof Your Business with OnIT Solutions
Staying on top of AI and technology trends is critical for Australian SMBs. Our team helps you cut through the noise and implement the right solutions for your business. Talk to our AI Strategy team about what today's developments mean for your organisation — or explore our full range of Managed IT Services.
