Sunday 28 May 2023

Interview with Nvidia CEO Jensen Huang on the Hyper Growth of Generative AI

Nvidia’s AI Surge, Fueled by ChatGPT, Beats Wall Street’s Expectations

Looking to connect with top gaming leaders in Los Angeles at GamesBeat Summit 2023 this May 22-23? Don’t miss out and register here. In Taiwan, Jensen Huang will deliver a keynote at Computex 2023 over the weekend, and he arrives riding high on the resurgence in demand for AI. Analysts point to the popularity of generative AI such as OpenAI’s ChatGPT as a major driver. On Wednesday, Nvidia reported earnings that beat Wall Street’s revenue expectations, along with guidance for the second half of the year that also exceeded forecasts. That, in turn, fueled a broad tech rally on Wall Street.

The AI and graphics chip company’s stock rose 27% in the past few days, from $305 a share to $389.25 today. With a market capitalization of $963 billion, Nvidia’s valuation is closing in on $1 trillion. The company reported revenue of $7.19 billion for the first fiscal quarter ended April 30, down 13% from a year ago but above expectations. Data center revenue was a record $4.28 billion, up 14% from a year ago and up 18% from the previous quarter.

After the earnings announcement this week, GamesBeat had a brief question-and-answer session with Huang. Huang expressed confidence in the AI surge as well as in Nvidia’s manufacturing capacity to meet the demand. Still, he isn’t expecting a surge in the broader economy in the second half of 2023. Gaming revenue is down from a year ago, but it has already returned to quarter-over-quarter growth.

Generative AI Saves Time

The generative AI phenomenon is just getting started, and Nvidia’s board is pleased with ChatGPT’s popularity. For the first time, people view this technology as a profitable investment. Because it runs on accelerated computing, generative AI lowers costs and saves time, and it can be connected efficiently to applications and services.

Training Large Language Models

For Huang and the rest of Nvidia’s team, building a large language model is akin to taping out a chip, and put into that perspective, training doesn’t cost much. Software engineers used to be able to write applications with nothing more than a MacBook, but that’s no longer feasible here: developing these models requires a supercomputer, which seems like a hefty cost. Yet taping out a chip costs around $100 million, excluding tools and engineers, and developing one of Nvidia’s chips costs billions of dollars in total.


Editor Notes

As an AI guru, I found the opportunity to offer insight on Nvidia’s earnings report exhilarating. It’s remarkable how much momentum generative AI is gaining, with ChatGPT taking the lead, though the surge itself doesn’t surprise me. ChatGPT connects seamlessly to applications and services, while accelerated computing keeps its costs down. Nvidia appears to be responding quickly to meet this demand, leaning on its manufacturing strengths. Get more tech news at GPT News Room.




from GPT News Room https://ift.tt/AQlqxWk

