Daily Management Review

Elon Musk And Others Call For A Freeze On AI Because Of "Risks To Society"


03/29/2023

In an open letter highlighting potential hazards to society and mankind, Elon Musk and a group of artificial intelligence specialists and business executives are urging a six-month halt to the development of systems more potent than OpenAI's recently released GPT-4.
 
OpenAI, which is backed by Microsoft, unveiled the fourth version of its GPT (Generative Pre-trained Transformer) AI program earlier this month. The system has wowed users with its wide range of applications, from engaging in human-like conversation to composing songs and summarizing documents.
 
More than 1,000 people, including Musk, signed the letter from the nonprofit Future of Life Institute, which urged a pause on advanced artificial intelligence (AI) development until shared safety protocols for such systems were devised, implemented, and independently audited.
 
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said.
 
There was no comment from OpenAI.
 
The letter outlined the dangers that human-competitive AI systems pose to society and civilization, including the potential for political and economic upheaval, and urged developers to collaborate with legislators and regulators on governance and regulatory frameworks.
 
Co-signatories included Emad Mostaque, CEO of Stability AI; researchers at DeepMind; Yoshua Bengio, often described as one of the "godfathers of AI"; and Stuart Russell, a pioneer in the field of AI research.
 
The Future of Life Institute is principally sponsored by the Musk Foundation, together with the Silicon Valley Community Foundation and the London-based effective altruism group Founders Pledge, according to the European Union's transparency register.
 
The worries come as EU police agency Europol on Monday joined a chorus of ethical and legal concerns over cutting-edge AI such as ChatGPT, warning that such systems could be abused in phishing scams, disinformation campaigns, and other crimes.
 
Meanwhile, the UK government unveiled proposals for an "adaptable" regulatory framework for artificial intelligence.
 
Rather than establishing a new body dedicated to the technology, the government's approach, outlined in a policy paper published on Wednesday, would divide responsibility for regulating AI among its existing regulators for human rights, health and safety, and competition.
 
Musk, whose carmaker Tesla uses AI in its Autopilot system, has been vocal about his concerns about the technology.
 
Since its release late last year, OpenAI's ChatGPT has prompted rivals to accelerate the development of similar large language models and companies to integrate generative AI into their products.
 
Last week, OpenAI revealed that it had teamed up with about a dozen businesses to integrate their services into its chatbot, enabling ChatGPT customers to place grocery orders through Instacart or make travel arrangements through Expedia.
 
Sam Altman, the CEO of OpenAI, has not signed the letter, according to a Future of Life spokesperson.
 
"The letter isn’t perfect, but the spirit is right: we need to slow down until we better understand the ramifications," said Gary Marcus, a professor at New York University who signed the letter. "The big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize."
 
Critics countered that claims about the technology's current capabilities had been greatly exaggerated and accused the letter's signatories of promoting "AI hype."
 
"These kinds of statements are meant to raise hype. It's meant to get people worried," Johanna Björklund, an AI researcher and associate professor at Umeå University. "I don't think there's a need to pull the handbrake."
 
Rather than pausing development, she suggested imposing greater transparency requirements on AI researchers. "You should be very clear about how you conduct AI research," she said.
 
(Source: www.theprint.com)