Daily Management Review

Spread Of ChatGPT Enthusiasm Into US Workplace Alarms Some

Despite concerns that have prompted employers like Microsoft and Google to limit its usage, a Reuters/Ipsos poll indicated that many workers across the U.S. are using ChatGPT to assist with basic tasks.
Companies around the world are weighing how best to make use of ChatGPT, a chatbot programme that uses generative AI to hold conversations with users and answer a wide range of prompts. Security firms and businesses, however, have raised concerns that it could result in leaks of intellectual property and strategy.
People have reportedly used ChatGPT for things like email composition, document summarization, and completing initial research to assist with their daily work.
The online poll on artificial intelligence (AI), conducted between July 11 and July 17, found that 28% of respondents regularly use ChatGPT at work, while only 22% said their employers explicitly allowed such external tools.
The Reuters/Ipsos poll of 2,625 adults across the United States had a credibility interval, a measure of precision, of about 2 percentage points.
Some 10% of those polled said their bosses explicitly banned external AI tools, while about 25% did not know if their employer permitted use of the technology.
After its November introduction, ChatGPT rose to the position of app with the fastest growth in history. It has sparked both enthusiasm and concern, putting its creator OpenAI in conflict with regulators, especially in Europe, where the firm's huge data collection has garnered condemnation from privacy advocates.
Researchers have found that similar AI could reproduce data it absorbed during training, creating a potential risk for proprietary information. Human reviewers from other companies may also read any of the generated chats.
"People do not understand how the data is used when they use generative AI services," said Ben King, VP of customer trust at corporate security firm Okta (OKTA.O).
"For businesses this is critical, because users don't have a contract with many AIs - because they are a free service - so corporates won't have run the risk through their usual assessment process," King said.
When questioned about the ramifications of individual employees using ChatGPT, OpenAI declined to comment. However, the business cited a recent blog post in which it assured corporate partners that their data would not be utilised to train the chatbot further unless they specifically consented to it.
Google's Bard gathers information such as text, location, and other usage details when users interact with it. Users can request that content supplied into the AI be withdrawn from the company's servers and delete previous activity from their accounts. Upon being questioned further, Google, which is owned by Alphabet, declined to comment.
There were no comments on the issue from Microsoft.
An employee of Tinder in the United States claimed that despite the company's explicit policy against it, staff members still used ChatGPT for "harmless tasks" like sending emails.
"It's regular emails. Very non-consequential, like making funny calendar invites for team events, farewell emails when someone is leaving ... We also use it for general research," said the employee, who declined to be named because they were not authorized to speak with reporters.
Despite Tinder's "no ChatGPT rule," the employee claimed that people still use it in a way that "doesn't reveal anything about us being at Tinder."
Reuters was unable to independently verify how Tinder staff were using ChatGPT. Tinder said it provides "regular guidance to employees on best security and data practices".
After learning that one of its employees had posted sensitive code to the site in May, Samsung Electronics issued a global ban on employees using ChatGPT and comparable AI technologies.
"We are reviewing measures to create a secure environment for generative AI usage that enhances employees' productivity and efficiency," Samsung said in a statement on Aug. 3.
"However, until these measures are ready, we are temporarily restricting the use of generative AI through company devices."
In June, Reuters reported that Alphabet had warned staff about using chatbots, such as Google's Bard, even as it promoted the initiative internationally.
Google said that while Bard can suggest undesirable code, it still helps programmers, adding that it aimed to be transparent about the limitations of its technology.
Some businesses told Reuters they are adopting ChatGPT and comparable platforms while keeping security in mind.
"We've started testing and learning about how AI can enhance operational effectiveness," said a Coca-Cola spokesperson in Atlanta, Georgia, adding that data stays within its firewall.
"Internally, we recently launched our enterprise version of Coca-Cola ChatGPT for productivity," the spokesperson said, adding that Coca-Cola plans to use AI to improve the effectiveness and productivity of its teams.
Meanwhile, Tate & Lyle's Chief Financial Officer Dawn Allen told Reuters that the company was testing ChatGPT after "finding a way to use it in a safe way."
"We've got different teams deciding how they want to use it through a series of experiments. Should we use it in investor relations? Should we use it in knowledge management? How can we use it to carry out tasks more efficiently?"
Some workers claim they are completely unable to use the platform on workplace PCs.
According to a Procter & Gamble employee who asked to remain anonymous because they were not authorised to speak to the media, "It's completely banned on the office network, like it doesn't work."
P&G declined to comment. Reuters was unable to independently verify whether employees at P&G were unable to use ChatGPT.
Companies should be cautious, according to Paul Lewis, chief information security officer of cyber security company Nominet.
"Everybody gets the benefit of that increased capability, but the information isn't completely secure and it can be engineered out," he said, citing "malicious prompts" that can be used to get AI chatbots to disclose information.
"A blanket ban isn't warranted yet, but we need to tread carefully," Lewis said.