ChatGPT, a fast-growing artificial intelligence program, has been praised for its ability to quickly compose responses on a wide range of topics, and its implications for national security and education have caught the attention of U.S. policymakers.
ChatGPT reportedly gained 100 million monthly active users within two months of its launch, making it the fastest-growing consumer application in history and a rising target for regulators.
It was developed by OpenAI, a private company backed by Microsoft Corp, and made freely available to the public.
Its popularity has prompted concerns that generative AI such as ChatGPT could be used to spread misinformation, while educators worry that students will use it to cheat.
Representative Ted Lieu, a Democrat on the House Science Committee, wrote in a recent New York Times opinion piece that he was enthusiastic about AI and the “wonderful ways technology will continue to benefit humanity,” but “freaked out by unfettered and unregulated A.I.”
Lieu introduced a resolution written by ChatGPT stating that Congress should prioritize AI “to ensure that the development and deployment of AI is done in a manner that is safe, ethical, and respects the rights and privacy of all Americans, and that the benefits of AI are widely distributed and the potential consequences are kept to a minimum.”
Aides to Democratic lawmakers said that OpenAI CEO Sam Altman visited Capitol Hill in January and met with tech-focused lawmakers, including Senators Mark Warner, Ron Wyden, and Richard Blumenthal and Representative Jake Auchincloss.
An aide to Senator Wyden said the senator pressed Altman on the need to ensure that artificial intelligence does not incorporate biases that could lead to discrimination in the real world, such as in housing or employment.
“While Senator Wyden thinks AI has enormous promise to accelerate innovation and research, he is determined to prevent automated systems from automating prejudice in the process,” said Wyden aide Keith Chu.
According to a second congressional staffer, the conversations centered on the rate of advancement in AI and its potential applications.
According to media reports, ChatGPT has been banned at some schools in New York and Seattle over concerns about plagiarism. One congressional staffer said that most of those raising concerns about cheating were educators.
OpenAI said in a statement that it does not want ChatGPT used for deceptive purposes in schools or anywhere else, and that it is already developing countermeasures to help anyone identify text generated by the system.
In an interview with Time, OpenAI’s chief technology officer, Mira Murati, said the company welcomes input from regulators and governments. “It’s not too early for authorities to get involved,” she said.
Andrew Burt, managing partner of BNH.AI, a law firm focused on AI liability, cited national security concerns and said he had spoken with lawmakers who are weighing whether to regulate ChatGPT and comparable AI systems, such as Google’s Bard, though he declined to name them.
“The whole value proposition of these kinds of AI systems is that they can produce content at sizes and speeds that people cannot,” he said.
“I anticipate that malevolent actors, non-state actors, and state actors with objectives that are antagonistic to the United States are utilizing these technologies to create potentially incorrect or destructive information.”
When asked how it should be regulated, ChatGPT demurred, stating, “As a neutral AI language model, I have no opinion on the precise legislation that may or may not be implemented to regulate AI systems like me.” It went on, however, to outline possible areas of concern for authorities, such as data protection, bias and fairness, and transparency about how its responses are generated.