Thursday, 29 June 2023

Chief Technology Officer Michael Kagan of Nvidia Interviewed

Nvidia Builds Architecture for the 21st Century Computer

The world of computing is constantly evolving, with technology becoming smaller and more powerful. According to Michael Kagan, the CTO of Nvidia, the 21st century computer scales from a smartwatch all the way up to the hyperscale datacentre. Nvidia is at the forefront of building the architecture for this new era of computing, providing everything from the silicon and software frameworks to the tuning that lets applications execute optimally.

Kagan, who joined Nvidia three years ago through the acquisition of Mellanox Technologies, is responsible for overseeing the architecture of all of Nvidia's systems. That includes developing the necessary components and optimizing how applications run on these modern machines.

The Evolution Beyond Moore’s Law

Moore’s Law, formulated by Gordon Moore in 1965, predicted that the number of transistors on an integrated circuit would double every year; in 1975 Moore revised the prediction to a doubling every two years. Chip manufacturers rode this cadence until around 2005, when physical limits made it increasingly difficult to keep improving performance by shrinking transistors alone.
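To make the two cadences concrete, compound growth of this kind follows N(t) = N0 · 2^(t/T), where T is the doubling period. The short sketch below compares yearly and two-year doubling; the starting transistor count and time spans are round, made-up numbers, not figures from the interview.

```python
# Illustrative sketch: compound transistor growth under Moore's Law.
# N(t) = N0 * 2 ** (t / T), where T is the doubling period in years.
# The starting count and time spans are invented round numbers.

def transistor_count(n0: float, years: float, doubling_period: float) -> float:
    """Projected transistor count after `years`, doubling every `doubling_period` years."""
    return n0 * 2 ** (years / doubling_period)

n0 = 1_000_000  # hypothetical starting point: one million transistors

for years in (2, 4, 10):
    annual = transistor_count(n0, years, doubling_period=1)    # Moore's 1965 cadence
    biennial = transistor_count(n0, years, doubling_period=2)  # 1975 revision
    print(f"after {years:2d} years: yearly doubling -> {annual:,.0f}, "
          f"two-year doubling -> {biennial:,.0f}")
```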

To overcome these limitations, manufacturers found other ways to increase computing power. One approach was to add more cores, allowing work to be processed in parallel. Another was to improve communication between chips and processors by using networks instead of shared buses. These innovations led to accelerators: specialized processors that execute particular workloads far faster than a general-purpose CPU and so raise overall performance.
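As a hedged illustration of scaling by core count rather than clock speed, the sketch below splits an independent, compute-bound workload across a pool of worker processes; the workload, input sizes and worker count are arbitrary stand-ins, not anything Kagan described.

```python
# Minimal sketch of scaling by core count rather than clock speed:
# split an independent workload across worker processes.
from concurrent.futures import ProcessPoolExecutor

def heavy_task(x: int) -> int:
    # Stand-in for a compute-bound kernel.
    return sum(i * i for i in range(x))

if __name__ == "__main__":
    inputs = [200_000] * 8
    # Serial baseline: one core does everything.
    serial = [heavy_task(x) for x in inputs]
    # Parallel version: the same work partitioned across cores.
    with ProcessPoolExecutor(max_workers=4) as pool:
        parallel = list(pool.map(heavy_task, inputs))
    assert serial == parallel
    print("identical results from serial and parallel runs")
```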

In the pursuit of more computing power, manufacturers also began focusing on artificial intelligence (AI) and other emerging applications. AI processing requires a different approach than the traditional von Neumann architecture: neural networks, inspired by the human brain, learn to recognize patterns in data, making it possible to solve complex problems that were previously out of reach.
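As a toy illustration of this learn-from-examples style of processing, the snippet below trains a single perceptron to recognize a simple pattern (logical AND) rather than following hand-written rules; the data, learning rate and epoch count are an invented minimal example, not a real workload.

```python
# Toy sketch: a single neuron that learns a pattern from examples
# instead of following hand-written rules.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0, 0, 0, 1], dtype=float)                      # target pattern (logical AND)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights, adjusted by learning
b = 0.0                  # bias
lr = 0.1                 # learning rate

for _ in range(50):                        # repeated exposure to the examples
    for xi, target in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        err = target - pred                # learn from the mistake
        w += lr * err * xi
        b += lr * err

print([1.0 if xi @ w + b > 0 else 0.0 for xi in X])  # expected: [0.0, 0.0, 0.0, 1.0]
```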

The Need for a New Paradigm

AI and other advanced applications, such as digital twins, necessitated a new paradigm that could accommodate the growing demand for computing performance. While traditional software development required comparatively modest computing power, AI demands enormous compute resources for training neural networks, and considerably less for inference.

Training large AI models, such as the one behind ChatGPT, requires many GPUs working in parallel. That means not only massive parallel processing but also effective communication between the GPUs. In addition, a new class of specialized chip, the data processing unit (DPU), which offloads networking, storage and security tasks from the CPU, became essential in this new era of computing.
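A common pattern for this kind of GPU collaboration is data parallelism: each GPU computes gradients on its own slice of the data, and the gradients are then averaged across devices (an all-reduce) before every copy of the model applies the same update. The sketch below simulates that flow with NumPy arrays standing in for per-GPU gradients; it is a conceptual illustration, not Nvidia's actual training stack.

```python
# Conceptual sketch of data-parallel training across several "GPUs":
# each worker computes a gradient on its own data shard, and an
# all-reduce averages the gradients so every worker applies the same update.
import numpy as np

NUM_WORKERS = 4  # stand-ins for 4 GPUs

def local_gradient(worker_id: int, weights: np.ndarray) -> np.ndarray:
    # Placeholder for the gradient computed on this worker's data shard.
    rng = np.random.default_rng(worker_id)
    return rng.normal(size=weights.shape)

def all_reduce_mean(grads: list) -> np.ndarray:
    # The communication step: combine every worker's gradient into one average.
    return sum(grads) / len(grads)

weights = np.zeros(8)
for step in range(3):
    grads = [local_gradient(w, weights) for w in range(NUM_WORKERS)]
    avg_grad = all_reduce_mean(grads)   # effective communication between "GPUs"
    weights -= 0.01 * avg_grad          # identical update on every worker
    print(f"step {step}: |w| = {np.linalg.norm(weights):.4f}")
```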

Huang’s Law: The Acceleration of Computing

Jensen Huang, Nvidia’s founder and CEO, identified a new trend in GPU-accelerated computing. According to Kagan, GPU-accelerated computing performance doubles every other year, outpacing what transistor scaling alone would deliver. Adding more and better accelerators, along with advances in algorithms, allows ever more sophisticated data processing.

Partitioning functions between the GPU, CPU and DPU, interconnected by a network, further enhances computing capability. Nvidia’s acquisition of Mellanox also brought in-network computing, which performs calculations on data as it flows through the network.
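One way to picture in-network computing is that reductions, such as summing partial results, happen inside the switches as data moves toward the root of the network, so hosts send far less traffic than in an all-to-all exchange. The sketch below models a two-level switch tree in plain Python; the topology, buffer sizes and traffic accounting are invented for illustration and say nothing about Mellanox's actual implementation.

```python
# Illustrative model of in-network reduction: switches sum partial results
# as data flows toward the root, instead of every host exchanging full
# buffers with every other host. Topology and sizes are made up.

HOSTS = 8          # leaf nodes producing partial results
SWITCH_FANOUT = 4  # hosts attached to each leaf switch
BUFFER_ELEMS = 1_000_000

partials = list(range(HOSTS))  # tiny stand-ins for each host's partial result

def in_network_reduce(values):
    # Leaf switches sum their attached hosts; the spine switch sums the leaf results.
    leaf_sums = [sum(values[i:i + SWITCH_FANOUT])
                 for i in range(0, HOSTS, SWITCH_FANOUT)]
    return sum(leaf_sums)

print("reduced value:", in_network_reduce(partials))  # same as sum(partials)

# Rough host-side traffic comparison for reducing a buffer of BUFFER_ELEMS values:
all_to_all = HOSTS * (HOSTS - 1) * BUFFER_ELEMS  # every host sends its buffer to every other host
in_network = HOSTS * BUFFER_ELEMS                # every host sends its buffer once, to its switch
print(f"host traffic, all-to-all: {all_to_all:,} elements")
print(f"host traffic, in-network: {in_network:,} elements")
```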

While Moore’s Law relied on transistor count to drive computing performance, Huang’s Law observes that GPU-accelerated system performance doubles every other year. Even that pace, however, may struggle to keep up with the demands of AI applications, which require roughly 10 times more computing power each year.
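The mismatch in that last sentence is easy to quantify: a supply curve that doubles every two years and a demand curve that grows tenfold every year diverge very quickly. The short calculation below compares the two over five years, using only the growth rates quoted in the article.

```python
# Compare the growth rates quoted above: performance doubling every other
# year (Huang's Law) versus AI demand growing 10x per year.
for year in range(1, 6):
    supply = 2 ** (year / 2)   # doubles every two years
    demand = 10 ** year        # grows tenfold each year
    print(f"year {year}: supply x{supply:6.1f}, demand x{demand:>9,}, "
          f"gap x{demand / supply:,.0f}")
```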

In conclusion, Nvidia is at the forefront of building the architecture for the 21st century computer. With advancements in GPU-accelerated computing and the development of specialized chips like the DPU, computing power continues to increase exponentially. While traditional Moore’s Law reached its physical limits, innovative approaches such as parallel processing and in-network computing have propelled computing capabilities to new heights. However, the demand from AI applications poses new challenges and necessitates continuous innovation to meet evolving computational needs.

Editor Notes:

The evolution of computing power is fascinating, with Nvidia leading the way in developing the architecture for the 21st century computer. The combination of GPU-accelerated computing, specialized chips, and innovative data processing techniques has unlocked new possibilities in AI and other advanced applications. As computing power continues to surge, we can expect further breakthroughs in AI research and the development of cutting-edge technologies. To stay updated on the latest advancements in AI and technology, visit GPT News Room.

**Opinion piece**: The rapid advancement of computing power is reshaping industries and paving the way for unprecedented innovation. Nvidia’s commitment to pushing the boundaries of what’s possible exemplifies the spirit of technological progress. As we navigate the complexities of an AI-driven world, it’s reassuring to see companies like Nvidia driving the development of robust architectures and specialized chips. The fusion of hardware and software expertise is revolutionizing the computing landscape, and it’s exciting to witness these transformative changes firsthand. With each breakthrough, we inch closer to a future where technology seamlessly integrates into our daily lives, enabling remarkable achievements and unlocking new realms of human potential.

