ChatGPT has seamlessly integrated into our daily tasks, sometimes even edging out Google Search for numerous queries. Leveraging its potential, many users have expanded its capabilities through ChatGPT plugins and extensions, transforming it into a tool for handling intricate tasks.
However, use ChatGPT with care and be mindful of what you disclose to it. Reports have surfaced of private conversations leaking, a reminder to tread carefully in the AI landscape. Here is a look at the risky side of ChatGPT.
An observant reader of Ars Technica has brought to light potential data leakage concerns with ChatGPT, an AI-powered chatbot created by OpenAI.
On Monday, the publication received screenshots indicating that sensitive information from private conversations had been unintentionally exposed.
This included the accidental exposure of personal details, login credentials, and other confidential data.
In the conversations captured by the vigilant reader and shared with Ars, diverse content was exposed, ranging from a dialogue about building presentations to snippets of PHP code.
What stands out as particularly strange is the discovery of troubleshooting tickets apparently tied to a pharmacy's prescription drug portal.
Adding an extra layer of intrigue, the text within these apparent tickets suggests they originated in 2020 and 2021—well before the launch of ChatGPT.
ChatGPT has a concerning history of data leaks. As Ars Technica reported, in March 2023 a flaw in ChatGPT unintentionally exposed the titles of other users' chats.
Subsequently, in November 2023, researchers discovered that certain prompts could coax the chatbot into regurgitating large amounts of private data from the material used to train the underlying LLM.
OpenAI informed Ars Technica that it is actively looking into the reported issues.
Regardless of the investigation's outcome, avoid sharing sensitive information with any AI chatbot, especially one you did not build and run yourself. Your data security is paramount, so exercise caution in your interactions with AI.