Monday, 1 May 2023

‘AI pioneer’ Geoffrey Hinton leaves Google to raise concerns about the technology – here’s why.

Is the rise of AI causing more harm than good? While technologies like ChatGPT and Google Bard have impressed people with their capabilities, experts in the field have raised concerns about potential negative impacts. Elon Musk and many others in the industry have signed an open letter calling for a pause in the AI arms race. Now Geoffrey Hinton, who pioneered the use of neural networks in AI, has left Google after more than a decade to speak out against the rise of AI.

Hinton has two main concerns about AI: misinformation and automation. Although he long felt that Google was a “proper steward” of the technology, the rise of chatbots and their potential for misuse have led him to see Google and Microsoft as locked in an AI arms race that may be impossible to stop. Hinton worries that the average person already struggles to tell AI-created content from human-created content, and that chatbots may take over more tasks in the future.

While some former Google engineers disagree with Hinton and believe that Google is behaving in a “safe and responsible manner,” others have expressed concern about AI’s capabilities. Blake Lemoine, another former Google engineer, claimed that LaMDA, the large language model that powers Google Bard, was sentient; he was fired soon after making the claim. Lemoine also believes that Google was ready to release a version of Bard before OpenAI unleashed ChatGPT, but pulled the plug partly because of concerns he raised.

Microsoft’s chief scientific officer, Eric Horvitz, sits somewhere between these two positions. He recently signed an open letter calling for government regulation of AI rather than a research pause, and he doesn’t think we need to slam the brakes on AI development. Horvitz argues that bad actors misusing AI is a major concern, but that an AI takeover is not a top worry.

Both Hinton and Horvitz agree that bad actors misusing AI is a key concern. Hinton advocates stopping generative AI before it becomes a major problem, while Horvitz calls for government regulation and corporate action to work together toward a safer future. Nvidia, for its part, has announced NeMo Guardrails, new technology that sets boundaries on what AI chatbots can say and do – a sign that companies are taking AI safety seriously.

In conclusion, there is no consensus on the threat that AI poses, or on whether companies are currently doing enough to address it. Between these different positions, however, a common thread emerges: concern about how the technology will be used. With new advances in AI on the horizon, only time will tell whether Hinton’s concerns will be realized, or whether optimists like Lemoine and Horvitz are closer to the mark.




from GPT News Room https://ift.tt/JdVf7HN

