Technology

OpenAI whistleblowers ask SEC to probe restrictive non-disclosure agreements


According to a plan published on OpenAI's website on Monday, the artificial intelligence company has established a framework for addressing safety in its most advanced models, including giving its board the power to reverse safety decisions.
OpenAI, which is backed by Microsoft, will deploy its latest technology only if it is judged safe in specific areas such as cybersecurity and nuclear threats. The company is also creating an advisory group to review safety reports and send them to its executives and board of directors. Executives will make the decisions, but the board can overturn them.
Since ChatGPT's debut a year ago, the potential dangers of artificial intelligence have been top of mind for AI researchers and the general public alike. Generative AI technology has impressed users with its ability to write poems and essays, but it has also raised concerns that it could spread misinformation and manipulate people.
In April, a group of AI industry executives and experts signed an open letter calling for a six-month pause in the development of systems more powerful than OpenAI's GPT-4, citing potential risks to society. A Reuters/Ipsos poll in May found that more than two-thirds of Americans are concerned about the possible negative effects of artificial intelligence, and 61% believe it could pose a threat to society.

OpenAI whistleblowers have petitioned the SEC to investigate the artificial intelligence company’s stringent non-disclosure agreements.
“Given the well-documented potential risks posed by the irresponsible deployment of AI, we urge the Commissioners to approve an investigation into OpenAI’s prior NDAs immediately and to review current efforts apparently being undertaken by the company to ensure full compliance with SEC rules,” the whistleblowers wrote.
According to the letter, the AI company forced employees to sign agreements waiving their federal whistleblower compensation rights.
The whistleblowers asked the SEC to penalize OpenAI for each unlawful agreement.
In an email, an SEC representative said the agency does not comment on whistleblower submissions.
OpenAI did not immediately respond to a request for comment on the letter.
“Artificial intelligence is rapidly and dramatically changing technology,” said Sen. Chuck Grassley, whose office received the whistleblower letter. He added, “OpenAI’s policies and practices appear to chill whistleblowers’ right to speak up and receive compensation for protected disclosures.”

The whistleblowers claimed that OpenAI imposed overly restrictive employment, severance, and non-disclosure agreements on its employees, agreements that could have penalized anyone who reported OpenAI to federal authorities.
According to the letter, OpenAI required workers to obtain the company's prior consent before disclosing information to federal regulators, and its non-disparagement provisions did not include exemptions for reporting securities violations to the SEC.

The letter also requested that OpenAI produce all non-disclosure contracts, including employment, severance, and investor agreements, for inspection by the SEC.
As AI models grow more sophisticated, OpenAI's generative AI systems, which can hold human-like conversations and create images from text prompts, have raised safety concerns.
In May, as it began training its next AI model, OpenAI formed a Safety and Security Committee overseen by board members, including CEO Sam Altman.

