The U.S. has begun exploring potential regulations for A.I. systems such as ChatGPT. Amid growing concerns about A.I.'s impact on national security and education, the Biden administration on Tuesday requested public input on accountability measures for A.I. systems.
ChatGPT, an A.I. software that has become the fastest-growing consumer application in history with over 100 million monthly active users, has caught U.S. legislators’ notice for its capacity to produce responses to a wide range of inquiries instantly.
The National Telecommunications and Information Administration, a Commerce Department office that advises the White House on telecommunications and information policy, wants comments on an A.I. “accountability mechanism” due to “increasing regulatory interest.”
The agency seeks ways to ensure “that A.I. systems are lawful, effective, ethical, safe, and otherwise trustworthy.”
“Responsible A.I. systems might deliver significant advantages if we address their possible repercussions and drawbacks,” said NTIA Administrator Alan Davidson. “Companies and consumers must trust these technologies to maximize their potential.”
This week, President Joe Biden weighed in on whether A.I. is dangerous, saying tech companies should ensure their products are safe before releasing them.
ChatGPT is made by California-based OpenAI, which is backed by Microsoft Corp. (MSFT.O). The chatbot has impressed some users with its speedy replies and upset others with its mistakes.
NTIA’s report on “efforts to guarantee A.I. systems perform as advertised – and without causing harm” will inform the Biden administration’s “cohesive and comprehensive federal government strategy to AI-related dangers and potential.”
The Center for Artificial Intelligence and Digital Policy has asked the U.S. Federal Trade Commission to bar OpenAI from issuing new commercial releases of GPT-4, calling the system “biased, fraudulent, and a concern to privacy and public safety.”