Wednesday 7 June 2023

Professors React to the Proposal to ‘Halt’ AI Research

The Urgent Need for Responsible AI Development: Why Some Experts Are Calling for a “Pause”

Generative AI technologies like ChatGPT are advancing at a breakneck pace, leading to concerns from industry leaders about how AI could change, or even end, human life. These concerns have led to calls for a “pause” on AI development, with many experts in higher education urging governing bodies to study the implications of the technology and create frameworks for its responsible use.

The University of Florida recently signed onto the Rome Call for AI Ethics, which calls for technological progress that serves human genius and creativity rather than their gradual replacement. My T. Thai, associate director of the Nelms Institute for the Connected World and a member of an expert panel at the university’s Herbert Wertheim College of Engineering, emphasizes that AI systems should be developed only once their effects on society and humanity can be shown to be positive and their safety can be verified.

Junfeng Jiao, a professor at the University of Texas at Austin, believes universities should play a bigger role in AI research and development and says more guidance is needed on the training of generative AI and large language models. Meanwhile, Shannon French, a professor of ethics at Case Western Reserve University, argues that calls for a “pause” on AI development are a clever way for private tech industry leaders to stoke panic about hypothetical threats and divert attention from the real issues with existing AI. She emphasizes that the most pressing problem with AI is its bias, and that AI is being rushed into use before it is ready.

Paul Root Wolpe, director of the Emory University Center for Ethics, stresses the need for broader regulation of the field, noting that every new iteration of AI development is a new opportunity to correct mistakes and solve problems. When industry leaders call for regulation, he argues, we should listen, because the industry’s incentive is normally to avoid being regulated.




from GPT News Room https://ift.tt/c1r4upo
