ChatGPT Didn’t Leak The Private Conversations, The User’s Account Was Compromised: Report


Recently, a ChatGPT user noticed unrecognized chats with the AI chatbot in his history section. Upon examining them, the user realized that the history contained several usernames and passwords, apparently linked to a pharmacy support system that uses the AI chatbot. The user, Chase Whiteside, sent screenshots of the conversations to the publication Ars Technica, which then covered the incident as a story.

Original Story Suggested ChatGPT Leaked The Conversations

Originally, the story reported that ChatGPT was leaking private conversations that included random users’ login credentials and other personal details. The report also mentioned other leaked discussions, including the name of a presentation someone was working on and details of an unpublished research proposal. In other words, it looked like ChatGPT was randomly showing the user a session someone else had conducted with the AI chatbot.


However, OpenAI Says Otherwise

In an updated report, OpenAI said Whiteside’s account was compromised. Per the company, the unauthorized logins came from Sri Lanka, whereas Whiteside logs into his account from New York. The company considers it “an account takeover in that it’s consistent with the activity we see where someone is contributing to a ‘pool’ of identities that an external community or proxy server uses to distribute free access.” In other words, a bad actor logged into the account and conducted the conversations Whiteside later found in his history.

Whiteside’s Account Was Accessed From Sri Lanka

Whiteside, on the other hand, believes it is unlikely that his account was compromised, since he uses a nine-character password with both upper- and lower-case characters. Even so, the chat histories appeared all at once on Monday, during a break from using the account. Moreover, OpenAI’s statement suggests that ChatGPT isn’t randomly leaking users’ details; rather, the account was compromised and logged into from Sri Lanka.
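Whiteside’s reasoning is worth a quick sanity check. Here is a rough back-of-the-envelope sketch (purely illustrative, assuming a password drawn only from upper- and lower-case letters as the report describes, and saying nothing about his actual password) of how big a nine-character mixed-case search space really is:

```python
import math

# Illustrative estimate only: assumes a password built solely from
# upper- and lower-case letters, per the report's description.
ALPHABET_SIZE = 52  # 26 lower-case + 26 upper-case letters
LENGTH = 9          # nine characters

combinations = ALPHABET_SIZE ** LENGTH
entropy_bits = LENGTH * math.log2(ALPHABET_SIZE)

print(f"Search space: {combinations:.2e} passwords")  # ~2.78e+15
print(f"Entropy: {entropy_bits:.1f} bits")            # ~51.3 bits
```

At roughly 51 bits of entropy, such a password is modest by modern password-cracking standards. And account takeovers that feed a shared “pool” of identities, as OpenAI describes, typically start from passwords leaked or reused elsewhere, so password strength alone does not rule out a compromise.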

Even though the company has confirmed that the ChatGPT account was taken over, this isn’t the first time a suspected data breach has been reported. Back in March 2023, a ChatGPT bug leaked several chat titles. Further, in November 2023, researchers were able to extract private data used to train the chatbot. Since ChatGPT lacks security features such as 2FA and a history of recent logins, users should approach the AI chatbot cautiously, a Business Today report suggests.



