
Samsung has banned employees from using generative AI tools like ChatGPT after an internal data leak in April


Samsung has recently imposed a ban on the use of generative AI tools like ChatGPT and Google Bard by its employees, citing security concerns. The company is worried that these tools may leak sensitive information or compromise company data. Here is a brief overview of what happened and why Samsung took this decision.

What are generative AI tools?

Generative AI tools are services that use artificial intelligence to create content such as text, images, music, or code. They can be used for various purposes, such as entertainment, education, research, or productivity. Some of the most popular generative AI tools are ChatGPT and Google Bard.

ChatGPT is a chatbot developed by OpenAI that can generate realistic and coherent conversations on any topic. It uses a large neural network trained on billions of words from the internet. Users can interact with ChatGPT through a web interface or an API.
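For context, programmatic access to ChatGPT looks roughly like the sketch below. This is a minimal illustration using the openai Python package, assuming an API key is available in the OPENAI_API_KEY environment variable; the model name and prompt are placeholders, not details from the Samsung incident.

```python
# Minimal sketch of calling the ChatGPT API via the openai Python package.
# Assumes OPENAI_API_KEY is set in the environment; the model and prompt
# below are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # reads the API key from OPENAI_API_KEY

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Explain what this function does: def add(a, b): return a + b"},
    ],
)

print(response.choices[0].message.content)
```

Note that whatever text is placed in the message content leaves the user's machine and is processed by a third-party service, which is precisely the kind of exposure that worried Samsung.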

Google Bard is a similar service that generates text from a given prompt or keywords. It can also write summaries, headlines, captions, slogans, and more. Bard is powered by Google’s large language models and is offered as a web-based chatbot.

What happened with Samsung?

According to Bloomberg News, Samsung banned the use of generative AI tools on its internal networks and company-owned devices in April 2023, after discovering that some of its staff members had unintentionally exposed private information to ChatGPT. The information included confidential source code and business plans.

Samsung learned about the incident after one of its employees posted the source code to an AI portal, where it was spotted by another user who alerted the company. Samsung then investigated the matter and found out that the employee had used ChatGPT to generate comments for the code.

Samsung also found that other employees had used ChatGPT and Google Bard for various tasks, such as writing emails, reports, presentations, and proposals. The company was concerned that these tools might store user data and use it to train or improve their models, which could pose a security risk.

Samsung issued a memo to its staff, informing them about the ban and asking them not to upload sensitive business information via their personal devices. The memo also said that Samsung was reviewing security measures to create a secure environment for safely using generative AI tools to enhance productivity and efficiency.

Samsung confirmed the authenticity of the memo to Bloomberg News, but did not comment further on the details of the incident or the ban.

Why did Samsung ban generative AI tools?

Samsung’s decision to ban generative AI tools was motivated by several factors, such as:

  • Protecting its intellectual property and trade secrets from potential leaks or thefts.
  • Preventing its competitors or adversaries from accessing its data or strategies.
  • Avoiding legal or ethical issues that may arise from using third-party services that may not comply with its policies or regulations.
  • Maintaining its reputation and credibility as a leader in innovation and technology.

Samsung is not the only company to restrict generative AI tools. Several banks and other large firms limited employee access to ChatGPT in early 2023 over data-security and compliance concerns, and more organizations may follow suit as generative AI becomes more widespread and powerful.

Generative AI tools are powerful and promising technologies that can offer many opportunities and challenges for individuals and organizations. Samsung’s ban on them is a reminder that they also come with security risks and ethical dilemmas that need to be addressed carefully and responsibly.
