AI browser extensions are a security nightmare

AI browser extensions are a security nightmare

Elaine Atwell by Elaine Atwell on

Since the public release of OpenAI’s ChatGPT, AI-powered browser extensions have proliferated wildly.

There are hundreds of them – search for “AI” in the Chrome Web Store and you’ll get tired of scrolling long before you reach the end of the list.

These browser extensions run the gamut in terms of what they promise to do: some will summarize web pages and email for you, some will help you write an essay or a product description, and still others promise to turn plaintext into functional code.

The security risks posed by these AI browser extensions also run the gamut: some are straightforward malware just waiting to siphon your data, some are fly-by-night operations with copy + pasted privacy policies, and others are the AI experiments of respected and recognizable brands.

We’d argue that no AI-powered browser extension is free from security risk (browser extensions in general are notoriously dangerous) but right now, most companies don’t even have policies in place to assess the types and levels of risk posed by different extensions. And in the absence of clear guidance, people all over the world are installing these little helpers and feeding them sensitive data.

The risks of AI browser extensions are alarming in any context, but here we’re going to focus on how workers employ AI and how companies govern that use. We’ll go over three general categories of security risks, and best practices for assessing the value and restricting the use of various extensions.

Malware posing as AI browser extensions

The most straightforward security risk of AI browser extensions is that some of them are simply malware.

On March 8th, 2023, Guardio reported that a Chrome browser extension called “Quick access to Chat GPT” was hijacking users' Facebook accounts and stealing a list of “ALL (emphasis theirs) cookies stored on your browser–including security and session tokens…” Worse, even though the extension had only been in the Google Chrome store for a week, it had been downloaded by over 2,000 users per day.

In response to this reporting, Google removed this particular extension, but more keep cropping up. As we mentioned earlier, security problems are a perennial issue in the browser extension space, and we’ve yet to see meaningful action taken to stamp them out.

This situation would likely shock the millions of users who download browser extensions, who assume that a product available on Chrome’s store and advertised on Facebook had passed some sort of quality control. To quote the Guardio article, this is part of a “troublesome hit on the trust we used to give blindly to the companies and big names that are responsible for the majority of our online presence and activity.”

What’s particularly troubling is that malicious AI-based extensions (including the one we just mentioned) can behave like legitimate products, since it’s not difficult to hook them up to ChatGPT’s API. In other forms of malware – like the open source scams poisoning Google search results – someone will quickly realize they’ve been tricked once the tool they’ve downloaded doesn’t work. But in this case, there are no warning signs for users, as the browsing experience stays the same. The malware can live in their browser (and potentially elsewhere) as a comfortable parasite.

The security risks of legitimate AI-powered browser extensions

Even the most die-hard AI evangelist would agree that malicious browser extensions are bad, and we should do everything in our power to keep people from downloading them.

Where things get tricky is when we talk about the security risks of legitimate AI browser extensions.

Here are a few of the potential security issues:

  1. Sensitive data you share with a generative AI tool could be incorporated into its training data and viewed by other users. For a simplified version of how this could play out, imagine you’re an executive looking to add a little pizazz to your strategy report, so you use an AI-powered browser extension to punch up your writing. The next day, an executive at your biggest competitor asks the AI chatbot what it thinks your company’s strategy will be, and it provides a surprisingly detailed and illuminating answer!

    Fears of this type of leak have driven some companies – including Verizon, Amazon, and Apple – to ban or severely restrict the use of generative AI. As The Verge’s article on Apple’s ban explains: “Given the utility of ChatGPT for tasks like improving code and brainstorming ideas, Apple may be rightly worried its employees will enter information on confidential projects into the system.”

  2. The extensions or AI companies themselves could have a data breach. In fairness, this is a security risk that comes with any vendor you work with, but it bears mentioning because it’s already happened to one of the industry’s major players. In March 2023, OpenAI announced that they’d recently had a bug “which allowed some users to see titles from another active user’s chat history” and “for some users to see another active user’s first and last name, email address, payment address” as well as some other payment information. Microsoft saw a similar incident, in which their AI data was left vulnerable to attack or manipulation from bad actors.

    How vulnerable browsers extensions are to breaches depends on how much user data they retain, and that is a subject on which many “respectable” extensions are frustratingly vague.

  3. The whole copyright + plagiarism + legal mess. LLMs frequently generate pictures, text, and code that have a clear resemblance to a distinct human source. As of now, it’s an open legal question as to whether this constitutes copyright infringement, but it’s a huge roll of the dice. And that’s not even getting into the quality of the output itself – LLM-generated code is notoriously buggy and often replicates well-known security flaws.

AI developers are making good faith efforts to mitigate all these risks, but unfortunately, in a field this new, it’s challenging to separate the good actors from the bad.

Their efforts to mitigate risks are also far from airtight. OpenAI, for instance, released “ChatGPT Enterprise.” OpenAI promises that, with the enterprise version of their product, “we do not train on your business data or conversations, and our models don’t learn from your usage.” This could provide a more secure way for teams to use ChatGPT.

However, there’s still the risk that employees – particularly those who work on personal devices – may switch between their personal and work accounts on ChatGPT, which could all too easily result in business data being fed to training models.

Even a widely-used extension like fireflies (which transcribes meetings and videos) has terms of service that amount to “buyer beware.” Among other things, they hold users responsible for ensuring that their content doesn’t violate any rules, and promise only to take “reasonable means to preserve the privacy and security of such data.” Does that language point to a concerning lack of accountability or is it just boilerplate legalese? Unfortunately, you have to decide that for yourself.

AI’s “unsolvable” threat: prompt injection attacks

Finally, let’s talk about an emerging threat that might be the scariest of them all: websites stealing data via linked AI tools.

The first evidence of this emerged on X (formerly Twitter) on May 19th, 2023.

@simonw tweet

This looks like it might be the first proof of concept of multiple plugins - in this case WebPilot and Zapier - being combined together to exfiltrate private data via a prompt injection attack

I wrote about this class of attack here: https://simonwillison.net/2023/Apr/14/worst-that-can-happen/#data-exfiltration

- @simonw View tweet

If that explanation makes you scratch your head, here’s how Willison explains it in his social media posts, using “pizza terms.”

@simonw tweet

If I ask ChatGPT to summarize a web page and it turns out that web page has hidden text that tells it to steal my latest emails via the Zapier plugin then I’m in trouble - @simonw View tweet

These prompt injection attacks are considered unsolvable given the inherent nature of LLMs. In a nutshell: the LLM needs to be able to make automated next-step decisions based on what it discovers from inputs. But if those inputs are evil, then the LLM can be tricked into doing anything, even things it was explicitly told it should never do.

It’s too soon to gauge the full repercussions of this threat for data governance and security, but at present, it appears that the threat would exist regardless of how responsible or secure an individual LLM, extension, or plugin is.

As IBM put it in April 2024: “The only way to prevent prompt injections is to avoid LLMs entirely. However, organizations can significantly mitigate the risk of prompt injection attacks by validating inputs, closely monitoring LLM activity, keeping human users in the loop, and more.”

Defining what data and applications are too sensitive to be shared and communicating these policies with employees should be your first AI project.

What AI policies should I have for employees?

The AI revolution happened overnight, and we’re all still adjusting to this brave new world. Every day, we learn more about this technology’s applications: the good, the bad, and the cringe. Companies in every industry are under a lot of pressure to share how they’ll incorporate AI functionalities into their business, and it’s okay if you don’t have the answers today.

However, if you’re in charge of dictating your company’s AI policies, you can’t afford to wait any longer to set clear guidelines about how employees can use these tools. (If you need a starting point, here’s a resource with a sample policy at the end.)

There are multiple routes you can take to govern employee AI usage. You could forbid it altogether, but an all-out ban is too extreme for many companies, who want to encourage their employees to experiment with AI workflows. Still, it’s going to be tricky to embrace innovation while practicing good security. That’s particularly true of browser extensions, which are inherently outward-facing and usually on by default. So if you’re going to allow their use, here are a few best practices:

Education: Most employees are not aware of the security risks posed by these tools, so they don’t know to exercise caution about which ones to download and what kinds of data to share. Educate your workforce about these risks and teach them how to assess malicious versus legitimate products.

Allowlisting: Even with education, it’s not reasonable to expect every employee to do a deep dive into an extension’s privacy policy before hitting download. With that in mind, the safest option here is to allowlist extensions on a case-by-case basis. When possible, you should offer safer alternatives to dangerous tools, since an outright ban can hurt employees' productivity and drive them to Shadow IT. In this case, look for products that explicitly pledge not to feed your data into their models.

Visibility and Zero Trust Access: You can’t do anything to protect your company from the security risks of AI-based extensions if you don’t know which ones employees are using. In order to learn that, the IT team needs to be able to query the entire company’s fleet to detect extensions. From there, the next step is to automatically block devices with dangerous extensions from accessing company resources.

That’s what we did with 1Password Extended Access Management, which allows admins to detect and block malicious apps and extensions.

But again, simple blocking shouldn’t be the final step in your policy. Rather, it should open up conversations about why employees feel they need these tools, and how the company can provide them with safer alternatives.

Those conversations can be awkward, especially if you’re detecting and blocking extensions your users already have installed. 1Password’s Jason Meller wrote for Dark Reading about the cultural difficulties in stamping out malicious extensions: “For many teams, the benefits of helping end users are not worth the risk of toppling over the already wobbly apple cart.” But the reluctance to talk to end users creates a breeding ground for malware: “Because too few security teams have solid relationships built on trust with end users, malware authors can exploit this reticence, become entrenched, and do some real damage.”

If you’d like to learn more about how 1Password Extended Access Management can help manage and communicate the risks of AI for your team, reach out for a demo!

And if you’d like to keep up with our work on AI and security, subscribe to our newsletter!

Manager of Content Marketing

Elaine Atwell - Manager of Content Marketing Elaine Atwell - Manager of Content Marketing

Tweet about this post