Silicon Valley Startup Inflection AI Raises $1.3 Billion to Challenge OpenAI’s ChatGPT
Silicon Valley-based generative AI company Inflection AI has secured $1.3 billion in funding to compete with OpenAI’s popular AI chatbot, ChatGPT. The company’s AI assistant, Pi, aims to give users fast, relevant, and helpful information and advice. Inflection AI plans to offer Pi both directly to consumers and through an API. Pi officially launched in May.
While the exact amount of funding raised by OpenAI remains undisclosed, Microsoft’s investment in OpenAI is reported to be as high as $10 billion over the course of several years. Additionally, Inflection’s competitor, Anthropic, has raised a comparable amount of funding.
Investing in Compute Power
The key to success in the AI industry lies in computational power. OpenAI’s ChatGPT is powered by Microsoft’s Azure cloud, while Anthropic utilizes Google’s cloud to train and operate its Claude LLM. In contrast, Inflection AI intends to build its own supercomputer infrastructure to deploy Pi.
Inflection’s supercomputer is still a work in progress, but it has already topped the MLPerf GPT-3 training benchmark. Upon completion, the machine will comprise 22,000 Nvidia H100 GPUs, making it the largest AI cluster, and one of the largest computing clusters of any kind, in the world.
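To give a rough sense of that scale, the sketch below multiplies the GPU count by an approximate per-GPU peak. The roughly 1 petaFLOP of dense 16-bit tensor throughput per H100 is an approximation of Nvidia's published spec, not a figure from the article.

```python
# Back-of-the-envelope aggregate peak throughput of the planned cluster.
# Assumption (not from the article): ~1e15 FLOPS (about 1 petaFLOP) of
# dense 16-bit tensor compute per H100, approximating Nvidia's spec sheet.
gpus = 22_000
peak_flops_per_gpu = 1e15

cluster_peak = gpus * peak_flops_per_gpu
print(f"~{cluster_peak:.1e} FLOPS (~{cluster_peak / 1e18:.0f} exaFLOPS) peak")
```

Real-world training throughput would be well below this theoretical peak once interconnect and utilization losses are accounted for.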
The Importance of Hardware
Although Inflection AI is a generative AI software company, it has chosen to invest heavily in hardware. The rationale is straightforward: larger models require greater computational resources. Training computationally intensive models may account for a large share of a lab’s compute needs, but serving a consumer application at scale drives inference compute requirements up dramatically. Unlike training, inference offers no shortcut around the compute demands of serving every user request, so substantial AI systems are needed to handle the workload.
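The inference-scaling point can be illustrated with a common rule of thumb, which is an assumption of this sketch rather than a claim from the article: a dense transformer with N parameters needs roughly 2N FLOPs per generated token. The traffic figures below are hypothetical.

```python
# Rough sketch of why consumer-scale inference dominates compute.
# Assumption (not from the article): a dense N-parameter transformer
# needs about 2*N FLOPs per generated token.
params = 175e9                   # a GPT-3-class model size
flops_per_token = 2 * params     # ~3.5e11 FLOPs per token

daily_requests = 10_000_000      # hypothetical consumer traffic
tokens_per_request = 500         # hypothetical average response length

daily_flops = daily_requests * tokens_per_request * flops_per_token
print(f"{daily_flops:.2e} FLOPs per day of inference")
```

Unlike a training run, which ends, this cost recurs every day and grows with the user base, which is the article's point about consumer scale.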
The 22,000 H100 GPUs alone are estimated to cost approximately $800 million, and the total outlay for on-premises hardware grows further once infrastructure, real estate, and energy costs are included. Although $800 million sounds steep, a recent analysis estimates that running ChatGPT costs roughly $700,000 per day; at that rate, it would take about three years to spend $800 million.
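The three-year figure follows directly from the two numbers quoted above:

```python
# Burn-rate check using the figures from the article:
# ~$800M in GPU hardware vs. ChatGPT's reported ~$700K/day running cost.
gpu_cost_usd = 800_000_000      # estimated cost of 22,000 H100 GPUs
daily_run_cost_usd = 700_000    # reported daily cost of running ChatGPT

days = gpu_cost_usd / daily_run_cost_usd
years = days / 365
print(f"{days:.0f} days, or about {years:.1f} years")  # ~1143 days, ~3.1 years
```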
Inflection AI’s LLM, Inflection-1, which underpins Pi, belongs to the same compute class as OpenAI’s GPT-3.5, a class typified by GPT-3’s 175 billion parameters. Inflection AI also places Meta’s LLaMA (65 billion parameters) and Google’s PaLM (540 billion parameters) in the same compute class as Inflection-1, despite their differing sizes and scopes. PaLM can write code, however, while Inflection-1 cannot.
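One way to get a feel for what a 175-billion-parameter class model implies in practice is its raw weight footprint. The bytes-per-parameter figures below are standard numeric formats, not details from the article:

```python
# Raw weight storage for a 175B-parameter model at common precisions.
# The byte sizes are standard numeric formats (not from the article).
params = 175e9

for name, bytes_per_param in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1)]:
    gigabytes = params * bytes_per_param / 1e9
    print(f"{name}: {gigabytes:.0f} GB of weights")
```

Even at 16-bit precision, the weights alone run to hundreds of gigabytes, far beyond a single GPU's memory, which is why models in this class are served across many accelerators.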
In general, an LLM’s size correlates with its capabilities, including multilingual support, code generation, reasoning, and mathematical comprehension, as well as its accuracy. Even if the largest LLM ultimately wins, the real victor is likely to be the company with the most compute at its disposal. This is why owning and operating a 22,000-GPU cluster, as Inflection AI plans to, holds significant value.
Expensive Compute in Generative AI
As the potential of LLMs continues to be explored, large computing clusters like Inflection AI’s supercomputer will only grow in importance. In the current landscape, the dominant cost of deploying generative AI is compute: running an LLM at consumer scale demands substantial financial resources.
Unless there is a shift towards more affordable hardware options from established companies like AMD and Intel, or emerging startups, the cost of compute is unlikely to decrease. Consequently, it is expected that billions of dollars will continue to be invested in chatbot companies in the foreseeable future.
Editor Notes
It is fascinating to witness the competition between Inflection AI and OpenAI in the field of generative AI. The immense funding raised by both companies underscores the significance of AI technology in today’s world. With Inflection AI’s ambitious plans to build its own supercomputer infrastructure, it is clear that the race for compute power is a crucial aspect of AI development. Inflection AI’s Pi represents another step forward in the evolution of AI personal assistants, and it will be interesting to see how it competes with OpenAI’s ChatGPT in the market.
For more news and updates on AI advancements, visit GPT News Room: https://gptnewsroom.com