Samsung Employees Allegedly Leak Sensitive Data to ChatGPT

Samsung staff reportedly shared confidential information with ChatGPT, inadvertently risking the sensitive data being used to train the AI system and possibly appearing in its responses to other users.

Several Samsung employees reportedly shared confidential information with ChatGPT, the large language model chatbot, potentially exposing sensitive data to be used in training the AI system and to surface in the chatbot's responses to other users. According to The Economist Korea, as covered by Mashable, Samsung's semiconductor division had recently allowed engineers to use ChatGPT, and workers went on to leak confidential information on at least three occasions.

One employee asked the chatbot to check sensitive database source code for errors, another sought code optimization, and a third asked it to generate meeting minutes from a recorded session. Samsung has since restricted the length of employees' ChatGPT prompts to a kilobyte (1,024 characters) of text and is investigating the three employees involved. The company is also reportedly developing its own chatbot to avoid similar incidents. Engadget has reached out to Samsung for comment.

ChatGPT's data policy states that user prompts may be used to train its models unless users explicitly opt out. OpenAI, the chatbot's owner, warns against sharing sensitive information with ChatGPT, as it is unable to delete specific prompts from a user's history.

Deleting personally identifiable information from ChatGPT requires deleting the user account, a process that can take up to four weeks.
