A Monday report by Ars Technica highlighted an instance in which OpenAI’s viral chatbot, ChatGPT, leaked private conversations, including personal data and passwords belonging to other, unknown users. 

Several screenshots shared by a user showed multiple sets of usernames and passwords apparently connected to a support system used by pharmacy workers.  

The user was running ChatGPT with the GPT-4 model. 

The leaked conversation appeared to show an employee attempting to troubleshoot an app; both the app’s name and the store number where the problem occurred were visible in the exchange. 

“I went to make a query (in this case, help coming up with clever names for colors in a palette) and when I returned to access moments later, I noticed the additional conversations,” the user told Ars Technica. “They weren’t there when I used ChatGPT just last night (I’m a pretty heavy user). No queries were made — they just appeared in my history, and most certainly aren’t from me (and I don’t think they’re from the same user either).”

Other unrelated conversations were leaked to the user as well; one included details of a yet-to-be-published research paper, and another included the name of a presentation. 

“ChatGPT is not secure. Period,” AI researcher Gary Marcus said in response to the report. “If you type something into a chatbot, it is probably safest to assume that (unless they guarantee otherwise), the chatbot company might train on those data; those data could leak to other users.”

He added that such a company could also sell that data or use it to target ads to users, saying that “we should worry about (Large Language Models) hypertargeting ads in subtle ways.”

Though OpenAI’s privacy policy makes no mention of targeted ads, it does say that the company may share personal data with vendors and service providers. The policy also states that OpenAI may de-identify and aggregate personal information, which it may then share with third parties. 

OpenAI told Ars Technica that it is investigating the data leakage, though it did not respond to TheStreet’s request for comment regarding the report. 

More privacy troubles for ChatGPT

The report came the same day that the Garante, Italy’s data protection authority, told OpenAI that ChatGPT may be in violation of one or more data protection rules. 

The Garante banned ChatGPT last year for breaching European Union (EU) privacy rules, but reinstated the chatbot after OpenAI shipped a number of fixes, including giving users the right to opt out of having their personal data used to train OpenAI’s algorithms. 

The authority said in a statement that OpenAI has 30 days to submit counterclaims concerning the alleged breaches. 

Infringements of the EU’s General Data Protection Regulation (GDPR), which took effect in 2018, can result in fines of up to €20 million or 4% of a company’s worldwide annual revenue from the preceding year, whichever is higher. 

OpenAI said in an emailed statement that it believes its practices remain in line with GDPR and other privacy laws, adding that the company plans to “work constructively” with the Garante. 

“We want our AI to learn about the world, not about private individuals,” an OpenAI spokesperson said. “We actively work to reduce personal data in training our systems like ChatGPT, which also rejects requests for private or sensitive information about people.”
