Monday, 24 April 2023

Is it time for the world to pause AI? Speakers at Future State 2023 offer their insights.

The Debate Over AI: To Pause or Not to Pause

The term ‘AI’ has been buzzing around the tech and creative industries, with the emergence of AI art generators and Microsoft's integration of ChatGPT into Bing search and its Office software. However, the rise of AI has also raised concerns for society, including the possibility of AI-generated propaganda, job losses, and the fear that society may lose control of the technology. Recently, the Elon Musk-funded Future of Life Institute drafted a letter calling for a six-month pause on developing AI systems more powerful than GPT-4, with the aim of developing safety protocols in the meantime. We asked four top innovators and business minds to share their thoughts on the issue.

The Concerns

According to Dr Jonnie Penn, a professor of AI ethics and society at Cambridge University, there is a lot of misunderstanding surrounding large language models such as ChatGPT. He believes that the use of these models will pollute everyday language in ways that are difficult to spot and will cost money to fix. However, he also believes that such outcomes are avoidable through strict regulation. Penn argues that the modern tech industry can no longer regulate itself and instead proposes turning to community leaders, labor leaders, and market authorities for guidance.

Regulating AI

On the other hand, Constantine Gavryrok, global director of digital experience design at Adidas, is not sold on the idea of pausing AI development. He believes the industry instead needs regulation, and backs the European Union's work on its proposed Artificial Intelligence Act, which would mandate various requirements for developing and using AI systems. The legislation focuses on strengthening rules around data quality, transparency, human oversight, and accountability. It also aims to address ethical questions and implementation challenges in sectors such as healthcare, education, finance, and energy.

Expert Opinions

Danielle Krettek, founder of Google Empathy Lab, proposes a holistic approach to better integrate human concerns into technology development. Meanwhile, Sam Conniff, founder of The Uncertainty Experts, argues that the issue is not about AI but instead about market incentives and government regulations.

Closing Notes

While the debate rages on, we must ensure that AI development is regulated and held accountable for its impact on society. As with all new technology, AI has the potential for both positive and negative outcomes. We have a responsibility to create an ethical framework for AI development that promotes innovation while protecting the public.




from GPT News Room https://ift.tt/Um1lYIV

