THE BIZNOB – Global Business & Financial News – A Business Journal – Focus on Business Leaders, Technology, Entrepreneurship, Finance, Economy, Politics & Lifestyle

Economy

Musk, experts advise against training AI systems stronger than GPT-4.

Photo Credit: Reuters


In an open letter, Elon Musk and a group of artificial intelligence researchers and business leaders called for a six-month pause on training systems more powerful than OpenAI’s latest model, GPT-4, citing potential risks to society and humanity.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people, including Musk, Stability AI CEO Emad Mostaque, DeepMind researchers, and AI heavyweights Yoshua Bengio and Stuart Russell, called for a halt to advanced AI development until shared safety protocols for such designs were developed, implemented, and audited by independent experts.

“Powerful AI systems should be created only after we are convinced that their impacts will be good and their hazards will be manageable,” the letter read.

The letter warned of economic and political disruptions from human-competitive AI systems and urged developers to collaborate with politicians on governance and regulation.

On Monday, Europol warned about the possible misuse of powerful AI systems such as ChatGPT for phishing, misinformation, and other crime. Musk, whose Tesla (TSLA.O) cars use AI in their Autopilot system, has long voiced reservations about the technology.

Microsoft-backed OpenAI’s ChatGPT, released last year, has spurred rivals to build comparable large language models and prompted companies to integrate generative AI models into their products.

A Future of Life Institute spokeswoman told Reuters that OpenAI CEO Sam Altman had not signed the letter. OpenAI did not immediately comment.

“The letter isn’t perfect, but the principle is right: we need to slow down until we better grasp the repercussions,” said NYU professor Gary Marcus, who signed the letter. “They can inflict substantial harm… the large actors are becoming increasingly secretive about what they are doing, which makes it impossible for society to protect against any harms.”

