THE BIZNOB – Global Business & Financial News – A Business Journal – Focus On Business Leaders, Technology – Entrepreneurship – Finance – Economy – Politics & Lifestyle

Technology

OpenAI CEO visits South Korea to promote AI development

According to a plan published on its website on Monday, the artificial intelligence company OpenAI has established a framework for handling safety in its most advanced models, including giving its board the power to reverse safety decisions. The Microsoft-backed (MSFT.O) company will deploy its latest technology only if it is judged safe in specific areas such as cybersecurity and nuclear threats. OpenAI is also creating an advisory group to review safety reports and forward them to the company's executives and board of directors. Executives will make the decisions, but the board can overrule them.

Since the debut of ChatGPT a year ago, the potential risks of artificial intelligence have been at the forefront of the minds of both AI researchers and the general public. Generative AI has impressed users with its ability to compose poems and essays, but it has also raised concerns that the technology could spread misinformation and manipulate people. In April, a group of AI industry executives and experts signed an open letter calling for a six-month pause in the development of systems more powerful than OpenAI's GPT-4, citing possible risks to society. A Reuters/Ipsos poll conducted in May found that more than two-thirds of Americans are concerned about the potential adverse effects of artificial intelligence, and 61% believe it could pose a threat to society.


OpenAI CEO visits South Korea to promote AI development. OpenAI CEO Sam Altman will meet with South Korean President Yoon Suk Yeol to boost AI competitiveness.

Altman visited Israel, Jordan, Qatar, UAE, India, and South Korea this week after meeting with politicians and national leaders across Europe last month to discuss AI’s potential and risks.
“People are focused on not stifling innovation, and any regulatory framework has got to make sure that the benefits of this technology come to the world,” Altman told nearly 100 South Korean businesses on Friday.

Since Microsoft Corp (MSFT.O)-backed OpenAI launched ChatGPT last year, generative AI has grown rapidly and become popular, prompting lawmakers worldwide to address safety concerns.

The EU’s draft AI Act is anticipated to become law this year, while the US is considering updating existing rules for AI.

South Korea’s new AI regulations, less stringent than the EU’s, are pending legislative approval.

A parliament committee passed an AI law draft in February that ensures freedom to produce AI products and services unless regulators judge them harmful to people’s lives, safety, and rights.

In April, South Korea’s Ministry of Science and ICT announced plans to promote local AI development, including providing training datasets for “hyperscale” AI. Discussions on AI ethics and legislation are ongoing.

Naver (035420.KS), Kakao (035720.KS), and LG (003550.KS) are among the few South Korean tech corporations that have established foundation models for artificial intelligence in a market dominated by the US and China.

South Korean startups are pursuing niche or specialized markets that big tech in the US and China has not yet addressed.

LG AI Research chairman Kyunghoon Bae said, “In order for Korean companies to have strength in the global AI ecosystem, each company must first secure specialised technology for vertical AI,” or AI optimized for specific needs.

Naver wants to develop localized AI applications for politically sensitive Middle Eastern markets and for non-English-speaking regions such as Japan and Southeast Asia.



