Seemingly in response to data privacy and security concerns over how data is used to train the AI models powering ChatGPT, OpenAI is introducing the ability to turn off chat history in ChatGPT, preventing conversations from being used to train and improve its models.
According to the AI research and development firm, which has ushered in a new way of working defined by highly intelligent "copilots" capable of generating text, images and other media, the controls will roll out to all users.
The controls can be found in ChatGPT’s settings and can be changed at any time. Previously, OpenAI had an opt-out process for users who wanted to protect their data.
When users disable chat history, new conversations will be retained for 30 days and reviewed only when needed to monitor for abuse. After that, they will be permanently deleted, the company says in a new blog post.
In addition, the company says it is working on a new ChatGPT Business subscription for professionals who want more control over their data, as well as enterprises that want to manage their end users.
OpenAI says the ChatGPT Business subscription, which will launch in the coming months, will follow its API's data usage policies, meaning that end users' data won't be used to train models by default.
The company is also introducing a new Export option in settings to make it easier for users to export their ChatGPT data and understand what information ChatGPT stores. Users who choose this option will receive a file by email containing their conversations and all other relevant data.
OpenAI has previously said that its large language models (LLMs) are trained on a broad range of data, including publicly available content, licensed content and content generated by human reviewers. The company has pledged not to use this data to sell services, advertise, or build profiles of people. Instead, the data is used to help improve the models powering new AI tools.
“While some of our training data includes personal information that is available on the public internet, we want our models to learn about the world, not private individuals,” the company said in a blog earlier this month. “So we work to remove personal information from the training dataset where feasible, fine-tune models to reject requests for personal information of private individuals, and respond to requests from individuals to delete their personal information from our systems.”
Data privacy and security have been a major concern for IT and security leaders, with some even calling for organizations to block unsanctioned use of ChatGPT and similar tools.