Over the past eight months, ChatGPT has impressed millions of people with its ability to generate realistic-looking text, writing everything from stories to code. But the chatbot, developed by OpenAI, is still relatively limited in what it can do.
The large language model (LLM) takes “prompts” from users that it uses to generate ostensibly related text. These responses are created partly from data scraped from the internet in September 2021, and it doesn’t pull in new data from the web. Enter plug-ins, which add functionality but are available only to people who pay for access to GPT-4, the updated version of OpenAI’s model.
Since OpenAI launched plug-ins for ChatGPT in March, developers have raced to create and publish plug-ins that allow the chatbot to do a lot more. Existing plug-ins let you search for flights and plan trips, and make it so ChatGPT can access and analyze text on websites, in documents, and on videos. Other plug-ins are more niche, promising you the ability to chat with the Tesla owner’s manual or search through British political speeches. There are currently more than 100 pages of plug-ins listed on ChatGPT’s plug-in store.
But amid the explosion of these extensions, security researchers say there are some problems with the way that plug-ins operate, which can put people’s data at risk or potentially be abused by malicious hackers.
Johann Rehberger, a red team director at Electronic Arts and security researcher, has been documenting issues with ChatGPT’s plug-ins in his spare time. The researcher has documented how ChatGPT plug-ins could be used to steal someone’s chat history, obtain personal information, and allow code to be remotely executed on someone’s machine. He has mostly been focusing on plug-ins that use OAuth, a web standard that allows you to share data across online accounts. Rehberger says he has been in touch privately with around a half-dozen plug-in developers to raise issues, and has contacted OpenAI a handful of times.
“ChatGPT cannot trust the plug-in,” Rehberger says. “It fundamentally cannot trust what comes back from the plug-in because it could be anything.” A malicious website or document could, through the use of a plug-in, attempt to run a prompt injection attack against the large language model (LLM). Or it could insert malicious payloads, Rehberger says.
Data could also potentially be stolen through cross plug-in request forgery, the researcher says. A website could include a prompt injection that makes ChatGPT open another plug-in and perform extra actions, which he has shown through a proof of concept. Researchers call this “chaining,” where one plug-in calls another one to operate. “There are no real security boundaries” within ChatGPT plug-ins, Rehberger says. “It is not very well defined, what the security and trust, what the actual responsibilities [are] of each stakeholder.”
Since they launched in March, ChatGPT’s plug-ins have been in beta—essentially an early experimental version. When using plug-ins on ChatGPT, the system warns that people should trust a plug-in before they use it, and that for the plug-in to work ChatGPT may need to send your conversation and other data to the plug-in.
Niko Felix, a spokesperson for OpenAI, says the company is working to improve ChatGPT against “exploits” that can lead to its system being abused. It currently reviews plug-ins before they are included in its store. In a blog post in June, the company said it has seen research showing how “untrusted data from a tool’s output can instruct the model to perform unintended actions.” And that it encourages developers to make people click confirmation buttons before actions with “real-world impact,” such as sending an email, are done by ChatGPT.