OpenAI outlines an AI safety plan that lets its board reverse decisions. According to a plan published on OpenAI’s website on Monday, the artificial intelligence company has laid out a framework to address safety in its most advanced models, including giving the board the power to overrule safety decisions.
OpenAI, which is backed by Microsoft (MSFT.O), will only deploy its latest technology once it is deemed safe in specific areas such as cybersecurity and nuclear threats. The company is also creating an advisory group to review safety reports and send them to its executives and board of directors. While executives will make the decisions, the board can reverse them.
Since the debut of ChatGPT a year ago, the potential risks of artificial intelligence have been front of mind for both AI researchers and the general public. Generative AI technology has dazzled users with its ability to write poems and essays, but it has also raised worries that the technology could spread misinformation and manipulate people.
In April, a group of AI industry executives and experts signed an open letter calling for a six-month pause on developing systems more powerful than OpenAI’s GPT-4, citing potential risks to society. According to a May Reuters/Ipsos poll, more than two-thirds of Americans are concerned about the possible negative effects of artificial intelligence, and 61% believe it could endanger society.