The White House will host the CEOs of leading artificial intelligence companies, including Alphabet Inc.’s Google (GOOGL.O) and Microsoft (MSFT.O), on Thursday to discuss risks and safeguards as governments and lawmakers worldwide turn their attention to the technology.
Generative artificial intelligence applications like ChatGPT have captured the public’s attention this year, prompting companies to race out similar products they hope will transform how people work.
Millions of users have tested such tools, which supporters say can make medical diagnoses, write screenplays, draft legal briefs, and debug software, but which have also raised concerns that the technology could lead to privacy violations, skew employment decisions, and power scams and misinformation campaigns.
“We aim to have a frank discussion about the risks we see in current and near-term AI development,” said a senior administration official, speaking on condition of anonymity because of the sensitivity of the topic. “Our North Star here is this idea that if we’re going to seize these benefits, we have to start by managing the risks.”
Google’s Sundar Pichai, Microsoft’s Satya Nadella, OpenAI’s Sam Altman, and Anthropic’s Dario Amodei will join Vice President Kamala Harris, Biden’s Chief of Staff Jeff Zients, National Security Adviser Jake Sullivan, Director of the National Economic Council Lael Brainard, and Secretary of Commerce Gina Raimondo on Thursday.
Before the meeting, the administration announced a $140 million investment from the National Science Foundation to launch seven new AI research institutes and said the White House’s Office of Management and Budget would release policy guidance on the federal government’s use of AI.
In addition, Anthropic, Google, Hugging Face, NVIDIA, OpenAI, and Stability AI will have their AI systems publicly evaluated on a platform from Scale AI and Microsoft at the AI Village at DEFCON 31, one of the world’s largest hacker conventions.
After Biden announced his reelection bid, the Republican National Committee released a dystopian AI-generated video. Such AI-driven political advertising is expected to become more widespread as the technology spreads.
US regulators have lagged behind European governments in crafting robust rules on deepfakes and misinformation that companies must comply with or face heavy fines.
“We don’t see this as a race,” the administration official added.
In February, Biden signed an executive order directing federal agencies to eliminate bias in their design and use of AI. The administration has also released an AI Bill of Rights and an AI risk management framework.
Last week, the Federal Trade Commission and the Department of Justice’s Civil Rights Division pledged to use their legal powers to combat AI-related harm.
Tech companies have repeatedly pledged to combat election propaganda, misinformation about COVID-19 vaccines, racist and sexist content, pornography, child exploitation, and hateful messaging targeting ethnic groups.
Research and news reports show those efforts have fallen short. A recent Avaaz analysis, for example, found that only about one in five English-language fake news posts on six major social media platforms was labeled as misleading or removed, while misleading posts in other European languages went undetected.