Sunday 14 May 2023

“The Dilemma of AI Ethics in Relation to Google’s Collaboration with Geoffrey Hinton”

The Dangers of AI: Is Geoffrey Hinton a Warrior for the Cause or Hypocrite for the Ages?

Talk of AI dangers, whether real or imagined, has become an all-consuming frenzy, fuelled to a significant degree by the explosion of generative chatbots. But when scrutinising the critics, it is worth bearing in mind their motivations, especially in the case of Geoffrey Hinton, often referred to as the “Godfather of AI”. Hinton belongs to the “connectionist” school of AI thinking, which is at odds with the “symbolists”: connectionists model intelligence on neural networks and human behaviour, while symbolists treat AI as a machine that follows explicit rules and manipulates symbols.

Three years ago, John Thornhill wrote in the Financial Times that “as computers became more powerful, data sets exploded in size, and algorithms became more sophisticated, deep learning researchers, such as Hinton, were able to produce ever more impressive results that could no longer be ignored by the mainstream AI community.”

In 2012, Hinton helped develop a self-training neural network that accurately identified objects in pictures. That work sits uneasily with any claim he makes to being a crusader for the cause, especially given that he is one of the main beneficiaries of the AI industry. Hinton has earned truckloads of money from the likes of Google, Facebook, Amazon, and Microsoft. He left Google in 2023, which set the speculation mills whirring.

Was he using his departure as a way of criticising the very company he helped build? If so, it is a tad hypocritical; after all, he is one of the pioneers of generative AI. A group of AI researchers had penned an open letter a month before his departure, calling for a six-month pause on the development of large-scale AI projects. They asked whether technology leaders should be allowed to make such decisions, and whether nonhuman minds should be allowed to replace us. But it is hard to trust people who played a considerable part in creating the very technology they now criticise.

Hinton is seeking to promote himself by mildly condemning the very technology he helped create. He now says that machines becoming smarter than humans is a prospect he no longer considers far-fetched. Is this a case of Hinton being a warrior for the cause, or a hypocrite?

Hinton’s conviction that humans might lose control of AI is hardly original. He left Google so that he could share his concerns about the dangers of AI freely, without worrying about how his warnings might affect the company. Google nonetheless commended him for his contributions and promised to continue to “learn to understand emerging risks while also innovating boldly”.

Other researchers, however, are looking to normalise concerns over AI safety without having to quit their jobs. Hinton’s own philosophical arguments about AI, meanwhile, seem underdeveloped, hardly sophisticated enough to keep anyone up at night. He offered the standard excuse that, if he had not done it, somebody else would have.

It is indeed hard to see how to prevent unethical people from using AI in unethical ways. As with any powerful tool, the risks scale with the rewards. While we can only hope that the benefits outweigh the dangers, one thing is certain: we need to be careful and vigilant when it comes to AI.

Editor’s Notes: AI could revolutionise the world as we know it, but what happens when the technology reaches the limits of human control? The conversation surrounding AI and its role in society has grown increasingly contentious in recent years, with experts warning that we must be wary of machines becoming smarter than humans and taking over our jobs. While we should be excited about the potential of AI, we must ensure that we balance its benefits against its risks. It is essential that we promote open discussion around AI and its implications so that we can maximise its potential while safeguarding against potential threats.

