Generative AI tools such as the hugely popular ChatGPT can shorten the time it takes to complete tasks and enhance the quality of outputs by combining the processing power of algorithms with the crowdsourced knowledge and expertise contained in their training data.
But these tools can also harm companies if employees input confidential data into them to help with their work. This is why Samsung, the Korean electronics giant, has decided to ban them from its offices.
The Edge Markets reports that Samsung has prohibited the use of popular generative AI tools by employees after discovering that sensitive code was uploaded to the platform, which could jeopardize the security of the company’s data.
The company is concerned about data transmitted to AI platforms such as Google Bard and Bing, which is stored on external servers and could potentially be accessed by other users.
The company conducted an internal survey on the use of AI tools and found that 65% of respondents believed such services pose a security risk. Earlier in April, Samsung engineers accidentally leaked internal source code by uploading it to ChatGPT.
As a result, Samsung is creating its own internal AI tools for translation, document summarisation, and software development. The company is also developing ways to block the upload of sensitive company information to external services.
The new Samsung policy prohibits the use of generative AI systems on company-owned computers, tablets, and phones, as well as on its internal networks, but it does not affect the devices the company sells to consumers. Samsung has warned employees that violating the new policy could result in termination of employment.