Geoffrey Hinton, often called the “Godfather of AI,” made a surprising move on May Day by resigning from Google. Hinton, who has dedicated his career to advancing AI, now wants to speak freely about its risks. The decision is at once startling and predictable, given his growing concern that generative AI built on deep learning neural networks may displace a large swath of the workforce. Hinton believes we are edging closer to the day when machines will be more intelligent than humans.
This concern is not Hinton’s alone. The World Economic Forum predicts that AI will disrupt 25% of jobs over the next five years. Optimists counter that generative AI could usher in an era of symbiotic intelligence, in which humans and machines work together to create a renaissance of possibility and abundance. Yet the technology’s rapid growth, and its ability to produce human-quality text, video, and images, could also fuel misinformation and disinformation. Bad actors could use this capability to manipulate the masses until people no longer know what is true.
Hinton’s decision to speak openly about the dangers of AI, unconstrained by the interests of Google or any other corporation pursuing commercial AI development, is significant. It could spur the regulations and governance practices needed to prepare companies, governments, and societies for the approaching threat. Artificial general intelligence (AGI) is another topic of concern. The AI systems in use today excel primarily at specific, narrow tasks; AGI would possess human-like cognitive abilities and perform a wide range of human-level tasks across different domains.
AGI has been the stated mission of OpenAI, DeepMind, and others. Generative AI applications such as ChatGPT, built on Transformer neural networks, are accelerating predictions of when AGI will arrive. These models display emergent behaviors: novel, intricate, and unexpected capabilities that their developers cannot fully explain, arising from large-scale data, the Transformer architecture, and the powerful pattern-recognition abilities the models acquire.
These advances are compressing timelines and creating a sense of urgency. Hinton believes AGI could be achieved in 20 years or less. Early evidence of this trajectory can be seen in the nascent AutoGPT, an open-source recursive AI agent that autonomously feeds the results it generates back into new tasks until a complex goal is complete. But open-source code can be exploited by anyone, raising security risks and the potential for bad outcomes.
To address these concerns, top executives will gather in San Francisco on July 11-12 to share how they have integrated and optimized AI investments for success and avoided common pitfalls. In this rapidly changing technological landscape, it is crucial to stay informed and understand the potential risks and benefits of AI development.