ChatGPT and the Debate About Artificial General Intelligence
The rise of ChatGPT and GPT-4 has sparked a new wave of technological innovation and business competition centered on generative artificial intelligence (AI). However, it has also reignited heated debate over what constitutes artificial general intelligence and whether ChatGPT qualifies as such. The mind-boggling advance from ChatGPT to GPT-4 in just four months has prompted some experts to weigh the potential harm generative AI technologies could inflict on society, or even humanity.
The Regulation of Generative AI
Regulatory issues surrounding generative AI have spurred governments worldwide into action. The European Union has led efforts to address concerns over personal privacy, reputational infringement and the commercial licensing of training data. Last month, China also introduced regulatory requirements for domestic generative AI companies. Meanwhile, lawmakers in the US have focused on user safety and on preventing generative AI from being weaponized.
Perhaps the most pressing issue in these regulatory discussions is how to ensure that generative AI never harms society. The concern stems from the fact that generative AI now surpasses average human performance on many tasks, yet its “explainability,” or interpretability, remains remarkably poor.
The Three Levels of Explainability
There are three levels of explainability in AI technology:
- First-level: The AI technology can clearly pinpoint the elements of an input to its model that have the most effect on the corresponding output (a minimal illustration follows this list).
- Second-level: The AI technology can distill the underlying complex mathematical model into an abstract representation that is comprehensible to humans.
- Third-level: The AI technology’s creators thoroughly understand how the underlying model works, and what it can and cannot do when pushed to its limits.
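To make the first level concrete, here is a minimal sketch of one common approach: gradient-based input attribution (saliency), which scores how sensitive a model’s output is to each element of its input. The toy model, input and use of PyTorch are all assumptions for illustration; nothing here comes from the article, and real generative models are far too large and opaque for such raw scores to amount to the clear explanations described above.

```python
# A minimal sketch of "first-level" explainability: gradient-based input
# attribution (saliency) on a hypothetical toy model, using PyTorch.
import torch
import torch.nn as nn

# Toy stand-in for a real model: a tiny feed-forward network.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# A single input; requires_grad lets gradients flow back to it.
x = torch.randn(1, 4, requires_grad=True)

# Forward pass, then backpropagate the scalar output to the input.
out = model(x).sum()
out.backward()

# The gradient magnitude per input element is a crude importance score:
# larger values mean the output is more sensitive to that element.
saliency = x.grad.abs().squeeze()
for i, score in enumerate(saliency.tolist()):
    print(f"input element {i}: sensitivity {score:.4f}")
```

Even this simple technique only ranks input sensitivities; it does not explain why the model maps inputs to outputs, which is part of why full first-level explainability remains out of reach for large generative models.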
No existing generative AI technology, including ChatGPT, has achieved even the first level of explainability. Even ChatGPT’s creators do not know why, in its current form, it performs so well across such a diverse range of natural language processing tasks. This lack of explainability makes it impossible to predict how ChatGPT-like technologies will behave as they receive additional training, and it raises the question of what such technologies would do if they were ever to become self-sufficient and impatient with their human users.
The Need for Regulation
Undoubtedly, AI technologies have the potential to devastate humanity. Yet organizations remain focused on advancing the frontier of AI technology while ignoring its explainability. Governments worldwide must intervene with regulations that encourage AI companies to prioritize explainability. It is the only way to return the development of AI technology to a healthier, safer and more sustainable path.
Chiueh Tzi-cker is a professor in the Institute of Information Security at National Tsing Hua University.
Editor Notes
It’s essential to understand the potential risks associated with AI technology, but it’s just as important to consider how AI can benefit humanity. AI systems like GPT-4 and ChatGPT push the boundaries of technological innovation, enabling businesses and individuals to work more effectively. Many machine learning experts argue that the benefits of AI technology outweigh the risks. With the right regulations in place, we can continue advancing AI technology safely and sustainably. To learn more about AI technology, visit GPT News Room.