
OpenAI Disables ‘Browse’ Feature After Releasing It on ChatGPT App

OpenAI has disabled the browse feature on ChatGPT over concerns about a potential data leak, just two weeks after the feature was initially released. The feature allowed users to access and interact with external websites through ChatGPT, but privacy and security issues prompted OpenAI to turn it off. By doing so, OpenAI aims to reduce the risk of exposing sensitive information or compromising user data, reflecting its commitment to data protection and user privacy.

The temporarily disabled feature, which allowed users to retrieve external content, inadvertently displayed information it was not intended to show: OpenAI acknowledged that when users requested the full text of a URL, ChatGPT returned that content. The incident has amplified existing questions about data privacy, with some companies banning the use of ChatGPT over such concerns, and some countries, such as Japan, warning OpenAI about potential privacy breaches.

Interestingly, OpenAI had previously expressed its intention to explore ways for content creators and publishers to benefit from its technology, an objective the bug found in the browse feature runs counter to. OpenAI reiterated that it does not use customer data for training and that ChatGPT users can opt out of having their data used for training purposes.

The browse feature was offered as part of the ChatGPT Plus subscription, which provides subscribers with real-time data access for a monthly fee of $20. OpenAI has relied heavily on user feedback to improve its features and to address cybersecurity and AI safety concerns, and the browse feature was no exception. This approach reflects OpenAI’s commitment to a democratized model in which users have a voice in product improvement. However, the underlying issue of data security remains to be effectively addressed.
