
Companies Prioritize Addressing Security Risks Posed by AI Chatbots

According to a recent report by Bloomberg News, Samsung has temporarily banned its employees from using generative AI tools such as ChatGPT on its internal networks and company-owned devices, citing security risks. In a memo sent to staff, the South Korean multinational conglomerate emphasized the need to create a secure environment before generative AI tools can be used safely.

The company is reportedly concerned that data transmitted to AI platforms is stored on external servers, making it difficult to retrieve and delete. As ChatGPT has gained popularity worldwide, people have been using the tool for tasks such as summarizing reports, which can mean sharing sensitive information that OpenAI, the developer of ChatGPT, may be able to access.

Several other companies, including JPMorgan, Bank of America, Citigroup, Deutsche Bank, Goldman Sachs, and Wells Fargo, have taken steps to restrict or monitor the use of such tools due to security concerns. New York City public schools have also banned ChatGPT over misinformation fears.

According to The Verge, how private ChatGPT conversations are depends on how a user accesses the service. If a company connects through ChatGPT's API, conversations with the chatbot are not visible to OpenAI's support team and are not used to train OpenAI's models. However, this is not the case when a person enters text into the general web interface under its default settings.
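For illustration, below is a minimal sketch of what API-based access looks like in practice, assuming OpenAI's official Python client library; the model name, prompt, and environment-variable setup are placeholders, and the privacy behavior described above comes from The Verge's reporting rather than anything this code itself enforces.

```python
# Minimal sketch: sending a prompt through OpenAI's Chat Completions API
# rather than the public ChatGPT web interface. Assumes the official
# "openai" Python package (v1+) and an API key set in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user", "content": "Summarize this quarterly report: ..."},
    ],
)

print(response.choices[0].message.content)
```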
