Security experts are raising alarms about the data privacy risks posed by AI-powered browser extensions, which are increasingly common in corporate environments. These tools, which offer features like summarization, translation, and content generation, can, depending on the permissions they request, access and transmit browser activity, including sensitive financial data, internal communications, and proprietary information, to their developers' servers.
According to a 2025 report from cybersecurity firm Egress, the use of such 'shadow AI' tools by employees, often without IT department approval, creates significant blind spots for data loss prevention systems. The data collected can be used to train AI models or, in worst-case scenarios, be exposed in a breach, posing compliance and intellectual property risks.
Major browser vendors like Google and Mozilla have policies requiring extensions to disclose data collection practices, but enforcement and user awareness remain challenges. The UK's National Cyber Security Centre (NCSC) has issued guidance urging organizations to manage the use of browser extensions as part of their broader AI security policies.
Mitigation strategies include implementing centralized browser management, whitelisting approved extensions, and conducting employee training on the risks of unauthorized AI tools. Security professionals recommend treating browser extensions with the same scrutiny as any other third-party software with network access.
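One practical way to apply that scrutiny is to audit the permissions each installed extension declares in its manifest. The sketch below, a minimal illustration in Python, checks a Chrome/Chromium extension's `manifest.json` for broad permissions such as `tabs`, `webRequest`, and the `<all_urls>` host pattern. The set of "risky" permissions and the directory layout are assumptions to adapt to your environment; this is not a substitute for centralized browser management.

```python
import json
from pathlib import Path

# Permissions that let an extension read or alter page content broadly.
# This is an illustrative subset; tune it to your organization's risk tolerance.
RISKY_PERMISSIONS = {"tabs", "webRequest", "history", "clipboardRead", "<all_urls>"}

def flag_risky_permissions(manifest: dict) -> list:
    """Return the risky permissions declared in an extension manifest."""
    declared = set(manifest.get("permissions", []))
    # Manifest V3 moved host patterns into a separate key.
    declared |= set(manifest.get("host_permissions", []))
    return sorted(declared & RISKY_PERMISSIONS)

def audit_extensions(extensions_dir: Path) -> dict:
    """Map each installed extension to its risky permissions, if any.

    Assumes the Chrome profile layout of Extensions/<id>/<version>/manifest.json.
    """
    findings = {}
    for manifest_path in extensions_dir.glob("*/*/manifest.json"):
        manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        risky = flag_risky_permissions(manifest)
        if risky:
            name = manifest.get("name", manifest_path.parent.name)
            findings[name] = risky
    return findings
```

In a managed fleet, the same check is better enforced centrally (for example, via browser enterprise policies that block all extensions except an approved allowlist), but a script like this can surface what is already installed before a policy rollout.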