Researchers Uncover Vulnerabilities in ChatGPT Plug-Ins
Potential Zero-Click Account Takeover Exploit Is Among Identified Vulnerabilities

Researchers at security firm Salt Security have uncovered multiple vulnerabilities in third-party plug-ins used in ChatGPT, including a zero-click account takeover flaw that was triggered when users attempted to install a plug-in using their ChatGPT accounts.
Salt Security researchers uncovered three flaws affecting OAuth authentication, a GitHub-connected plug-in and the PluginLab.AI framework - all part of the third-party plug-in ecosystem used in ChatGPT. The flaws stemmed from how chatbot users connect to these services using their ChatGPT accounts.
"The severity and risk of the vulnerabilities in any gen AI ecosystem are mainly derived from the plug-in functionality," Yaniv Balmas, vice president of research at Salt Security, said. "As an example, we chose to look specifically at ChatGPT and during our research, we were able to find numerous vulnerabilities in ChatGPT's ecosystem."
The researchers said the most critical vulnerability - the zero-click account takeover flaw - stemmed from the AskTheCode plug-in, which ChatGPT users employ to query GitHub repositories.
When a user installs the plug-in, it creates a new account and asks GitHub for account permissions. Although the plug-in generates a unique code to verify the user's request, the researchers found that attackers could intercept the code by redirecting the authentication request using a different user ID.
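The interception described above can be sketched in a few lines. This is a minimal illustration only: the endpoint and parameter names (`AUTH_ENDPOINT`, `member_id`, `redirect_uri`) are hypothetical stand-ins, not the actual AskTheCode or PluginLab API. The point it demonstrates is that if the service issues a code for whatever user ID appears in the request, without checking that the requester owns that account, an attacker can request a code for the victim and have it delivered to an attacker-controlled address.

```python
from urllib.parse import urlencode

# Hypothetical authorization endpoint; illustrative only, not the
# real PluginLab or AskTheCode service.
AUTH_ENDPOINT = "https://auth.example-plugin.test/auth"

def build_auth_request(member_id: str, redirect_uri: str) -> str:
    """Build the URL that requests a one-time authorization code.

    The flaw described in the article: the code is issued for whatever
    member_id appears in the request, with no check that the requester
    actually owns that account.
    """
    params = {"member_id": member_id, "redirect_uri": redirect_uri}
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

# A legitimate request carries the victim's own ID and ChatGPT's callback...
victim_url = build_auth_request("victim-123", "https://chat.openai.com/aip/callback")

# ...but an attacker can replay the victim's ID with an
# attacker-controlled redirect, capturing the issued code.
attacker_url = build_auth_request("victim-123", "https://evil.test/capture")
```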
"The issues we found can be quite critical. For example, we showed that when a user uses a specific ChatGPT plug-in that connects ChatGPT to a GitHub account, an attacker would have been able to get full access to the user's entire GitHub account, without any interaction from the user," Balmas told Information Security Media Group.
The second flaw resulted from the OAuth authentication flow's failure to verify plug-in installation requests within a ChatGPT account. When a user installs any new plug-in from their ChatGPT account, the chatbot redirects them to the plug-in's website to obtain their approval.
After receiving the approval, ChatGPT installs the plug-in without further authentication. An attacker can exploit this behavior by crafting a malicious installation link that binds an attacker-controlled plug-in to the victim's account, possibly permitting account takeover attacks, the researchers said.
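A standard defense against this class of attack is to bind each installation approval to the session that initiated it. The sketch below shows one common pattern - a signed, session-bound OAuth `state` value - purely as an illustration of the missing check; it is an assumption on our part, not OpenAI's actual implementation.

```python
import hashlib
import hmac
import secrets

# Server-side secret used to sign state values (illustrative).
SERVER_KEY = secrets.token_bytes(32)

def issue_state(session_id: str) -> str:
    """Mint a state value tied to the session that started the install."""
    nonce = secrets.token_hex(8)
    sig = hmac.new(SERVER_KEY, f"{session_id}:{nonce}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{nonce}:{sig}"

def verify_state(session_id: str, state: str) -> bool:
    """Accept the approval only if the state matches this session."""
    try:
        nonce, sig = state.split(":")
    except ValueError:
        return False
    expected = hmac.new(SERVER_KEY, f"{session_id}:{nonce}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# A state issued for the victim's session does not verify for the
# attacker's crafted approval link targeting another session.
state = issue_state("alice-session")
assert verify_state("alice-session", state)
assert not verify_state("mallory-session", state)
```

With a check like this in place, an approval link crafted by an attacker for a different session would simply be rejected.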
"Since the attacker is the owner of this plug-in, they can see the private chat data of the victim, which may include credentials, passwords or other sensitive data," the report says.
The researchers exploited the authentication flaw in Charts by Kesem AI, a ChatGPT plug-in used to convert data into charts. The plug-in, developed with the Kesem.ai framework, authenticates users via their Google or Microsoft email accounts and generates a code that uniquely identifies them.
Since the application does not validate the code redirection request, the researchers said, attackers could steal user credentials.
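The missing validation amounts to checking where the authorization code is about to be sent. The sketch below is an assumed fix, not Kesem AI's actual code: it validates the redirect target against an allowlist of registered callback hosts before releasing the code.

```python
from urllib.parse import urlparse

# Hostnames registered as legitimate callbacks (illustrative).
ALLOWED_CALLBACK_HOSTS = {"chat.openai.com"}

def is_safe_redirect(url: str) -> bool:
    """Release the authorization code only to registered HTTPS callbacks."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_CALLBACK_HOSTS

assert is_safe_redirect("https://chat.openai.com/aip/callback")
assert not is_safe_redirect("https://evil.test/steal?code=abc")
```

Without a check like this, the code - and with it the user's session - can be redirected to any address an attacker chooses.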
Salt Security researchers alerted the affected companies to the vulnerabilities, which they initially discovered in 2023, and the companies have since mitigated them. The companies said the flaws have not resulted in any data loss so far.
The Salt Security report is not the first disclosure of vulnerabilities in ChatGPT. In December 2023, the company rolled out patches for an image markdown injection flaw that enabled data exfiltration via prompt injection. Prior to that, a vulnerability in a Redis client library allowed some ChatGPT users to see titles from another active user's chat history.
OpenAI did not immediately respond to a request for comment from Information Security Media Group.