
THE BIZNOB – Global Business & Financial News – A Business Journal – Focus On Business Leaders, Technology – Entrepreneurship – Finance – Economy – Politics & Lifestyle

Technology

OpenAI is unlikely to offer board seats to Microsoft or other investors

According to a plan published on OpenAI's website on Monday, the artificial intelligence company has already put in place a framework for handling safety in its most advanced models, including giving the board the power to reverse safety decisions. The Microsoft-backed (MSFT.O) company will deploy its latest technology only if it is judged safe in specific areas such as cybersecurity and nuclear threats. OpenAI is also creating an advisory group that will review safety reports and send them to the company's executives and board of directors; executives will make the decisions, but the board can overturn them.

Since ChatGPT's debut a year ago, the potential risks of artificial intelligence have been top of mind for AI researchers and the general public alike. Generative AI has impressed users with its ability to compose poems and essays, but it has also raised concerns that the technology could spread misinformation and manipulate people. In April, a group of AI industry executives and experts signed an open letter calling for a six-month pause in the development of systems more powerful than OpenAI's GPT-4, citing potential risks to society. A Reuters/Ipsos poll in May found that more than two-thirds of Americans are concerned about the possible adverse effects of artificial intelligence, and 61% believe it could pose a threat to society.


OpenAI, the company behind ChatGPT, is not expected to give seats on its new board of directors to Microsoft (MSFT.O) or other investors such as Khosla Ventures and Thrive Capital, a source familiar with the matter told Reuters on Tuesday.

Last week, OpenAI's board abruptly removed CEO and co-founder Sam Altman without giving specific reasons for his ouster, alarming both investors and staff. A new board was promised as part of his reinstatement.

Altman's removal created considerable uncertainty over the future of the firm, which sits at the center of the boom in artificial intelligence.

Thomas Hayes, head of the hedge fund Great Hill Capital, said in a statement, “I do not know that it is going to be the choice of OpenAI to leave Microsoft off the board.”

He said that “Microsoft will have something to say about it, given the amount of money they have put behind them,” and argued that it would not be in Microsoft’s best interest “to sit passively” in this situation.

The Information, which first reported the news, said OpenAI will have a nine-person board. According to the source, the three initial directors of the new board are expected to be confirmed as soon as this week: Chair Bret Taylor, former Treasury Secretary Larry Summers, and Quora CEO Adam D’Angelo.

D’Angelo is expected to be the only holdover from the previous six-person board that fired Altman.

Microsoft is one of OpenAI's biggest backers, having invested more than $10 billion in the company, which operates ChatGPT, its viral generative artificial intelligence chatbot.

Microsoft CEO Satya Nadella had previously told CNBC that governance at the ChatGPT maker needed to change, regardless of where Altman ended up.

Asked about the OpenAI board, a Microsoft spokeswoman said, “We will wait until the board officially says something.”

OpenAI and Thrive did not immediately respond to requests for comment, and Khosla declined to comment.



