In an open letter, Elon Musk and a group of artificial intelligence researchers and business leaders called for a six-month freeze on training systems more powerful than OpenAI’s latest model GPT-4, citing possible threats to society and humanity.
The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people, including Musk, Stability AI CEO Emad Mostaque, DeepMind researchers, and AI heavyweights Yoshua Bengio and Stuart Russell, called for a halt to advanced AI development until shared safety protocols for such designs were developed, implemented, and audited by independent experts.
“Powerful AI systems should be created only after we are convinced that their impacts will be good and their hazards will be manageable,” the letter read.
The letter warned of economic and political disruptions from human-competitive AI systems and urged developers to collaborate with politicians on governance and regulation.
On Monday, Europol warned that powerful AI systems like ChatGPT could be misused for phishing, misinformation, and other crimes. Musk, whose Tesla (TSLA.O) cars use AI in their Autopilot system, has long voiced reservations about the technology.
Microsoft-backed OpenAI’s ChatGPT, released last year, has spurred rivals to build comparable large language models and prompted companies to integrate generative AI into their products.
A Future of Life Institute spokesperson told Reuters that OpenAI CEO Sam Altman had not signed the letter. OpenAI did not immediately respond to a request for comment.
“The letter isn’t perfect, but the principle is right: we need to slow down until we better grasp the repercussions,” said NYU professor Gary Marcus, who signed the letter. “They can inflict substantial harm… the large actors are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize.”