Saturday 20 May 2023

Congress Wants to Regulate A.I., but How to Do So Remains Unclear.

OpenAI’s GPT-2, a large-language-model text generator, was withheld from full public release in 2019 over concerns about its potential for malicious applications, including misleading news articles and the automated production of abusive or faked social-media content. Four years after that warning, members of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law have met to discuss regulating artificial intelligence. OpenAI has since released GPT-4, built it into a chatbot, and created the image generator DALL-E. GPT models, however, tend to make things up, blurring the line between reality and invention.

The Senate hearing featured Sam Altman, OpenAI’s C.E.O., who argued that regulation of A.I. is essential. Altman offered to work with policymakers so that regulation balances safety incentives with access to the technology’s benefits. Senator Richard Blumenthal said A.I. companies should be required to test their systems and disclose known risks. To demonstrate the potential for A.I. harm, Blumenthal played a recording, produced by artificial intelligence, of himself speaking about regulation in words he never uttered.

Altman floated the idea of a new government agency tasked with licensing powerful A.I. models, but such licensing could concentrate power in the hands of a few, further eroding the free flow of information and ideas. Moreover, OpenAI and other companies have kept their large language models’ training data secret, making it impossible to assess the models’ inherent biases or safety.

Regulation of A.I. is essential, and policymakers must craft rules that balance safety incentives with access to the technology’s benefits without concentrating power in the hands of a few. A.I. companies must be held accountable, and A.I. must be kept safe and fair.

Keywords: OpenAI, large-language-model, GPT-2, GPT-4, DALL-E, artificial intelligence, regulation




from GPT News Room https://ift.tt/p5sWCqY
