Tuesday, 24 October 2023

The Register: Scientists advocate for AI regulation to prevent potential future dangers

24 AI Leaders Call for Stronger Regulation of Technology to Prevent Harm

A group of 24 AI experts, including Geoffrey Hinton and Yoshua Bengio, has released an open letter advocating for stronger regulation and safeguards in the field of artificial intelligence (AI). The group argues that while the rapid progress of AI is impressive, it also poses risks to society and individuals. The letter states that it is crucial to prioritize the development of AI systems with safe and ethical objectives, so as to avoid amplifying social injustice and eroding social stability.

The authors emphasize the need for collaboration between tech companies, private funders of AI research, and governments to ensure responsible and safe AI development. They propose that tech companies and private funders allocate at least one-third of their R&D budgets to safety measures. They also urge governments to establish regulatory frameworks that address AI risks, for example through model registration, whistleblower protections, incident reporting standards, and monitoring of AI model development and supercomputer usage.

The letter also suggests that governments should have access to AI systems before deployment in order to evaluate them for dangerous capabilities, a proactive step that could keep threatening autonomous AI systems from being released. Furthermore, the authors argue that developers of cutting-edge AI models should be held legally accountable for harms caused by their models when those harms are reasonably foreseeable and preventable.

While the call for stronger regulation and risk management in AI has gained support from many AI luminaries, Yann LeCun, chief AI scientist at Meta, disagrees. LeCun asserts that regulating AI research and development would hinder progress and innovation in the field, and he believes that open and accessible platforms are essential for AI to reach its full potential.

In a debate with Bengio, LeCun argued that fears of an AI doomsday scenario are exaggerated. Current AI models, he contended, have clear limitations and are far from being able to threaten humanity. As an example, he pointed to self-driving cars: AI models still cannot teach themselves to drive the way a human learner can.

The debate over AI regulation mirrors the early days of the internet, when similar questions of control and regulation arose. LeCun draws a parallel between the internet's success and its open nature, suggesting that AI should follow a similar path.

The authors of the open letter acknowledge that the current generation of AI may not pose immediate threats. Even so, they stress the importance of anticipating potential risks and putting responsible development practices in place before those risks materialize.

In conclusion, the open letter highlights the need for stronger regulation and safeguards to prevent AI from harming society and individuals. Although experts disagree about how much regulation is appropriate, collaboration between tech companies, funders, and governments remains crucial to safe and ethical AI development. Such a proactive approach can help mitigate the risks associated with AI while ensuring its responsible advancement.

Editor Notes:
It is encouraging to see prominent AI leaders advocating for stronger regulation and safeguards in the field. As AI technology becomes more advanced and widespread, addressing its risks and ensuring safe, ethical development is essential. Collaboration among stakeholders is key: governments, tech companies, and private funders must work together to establish regulatory frameworks and dedicate resources to safety measures. By taking a proactive approach, we can shape the future of AI in a way that benefits society while minimizing harm. To stay updated on the latest developments in AI, visit GPT News Room.

from GPT News Room https://ift.tt/ITGPMOu
