
OpenAI opts to develop its own chips

2024-02-02 10:16:15

Amid the wave of AI set off by generative AI and large models, many major manufacturers have, almost in unison, chosen to develop their own AI chips. Even though this means building a new team and spending heavily, in-house chips seem to have become a necessity for long-term development. Now OpenAI, the company that set off this AI torrent, or at least pushed it to the crest of the wave, is also planning to develop its own chips.

With NVIDIA already supplying it, why take the road of in-house chips?

Anyone paying even a little attention to OpenAI will have heard of the leadership upheaval at the company at the end of 2023. According to reports, during that drama Sam Altman had been quietly planning a multi-billion-dollar chip project (allegedly called Tigris); even his trip to the Middle East last year was reportedly to raise funds for it.

With major companies buying in bulk, NVIDIA's GPUs for AI servers were in extremely short supply last year. Meta, for example, announced early in the month that it plans to buy another 350,000 of NVIDIA's H100 GPUs. Even Microsoft, which provides servers for OpenAI, is struggling to expand its AI server capacity. For this reason, Microsoft is also accelerating development of its in-house AI chips, but those chips will certainly not be built solely for OpenAI; they are intended for all Azure customers.

On top of that, OpenAI faces extremely high operation and maintenance costs: rumors online put its daily server operating costs at up to 700,000 US dollars.

Against such high server hardware and operating costs, OpenAI introduced a subscription model, ChatGPT Plus, priced at 20 US dollars per month.

But according to statistics, as of October last year ChatGPT Plus had only slightly more than 200,000 subscribers, a user base far too small to cover operating expenses. What's more, because of limited hardware resources, OpenAI has been capping the number of ChatGPT Plus subscribers to avoid overloading its servers.
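A quick back-of-envelope calculation, using only the figures cited above (the rumored 700,000-dollar daily server cost and the roughly 200,000 subscribers at 20 dollars per month; both are unofficial estimates), shows how wide the gap is:

```python
# Rough comparison of rumored server costs vs. subscription revenue.
# All figures are unofficial estimates taken from the article above.

daily_server_cost = 700_000                    # USD/day, rumored
monthly_server_cost = daily_server_cost * 30   # ~21,000,000 USD/month

subscribers = 200_000                          # ChatGPT Plus, Oct 2023 estimate
monthly_price = 20                             # USD per subscriber
monthly_revenue = subscribers * monthly_price  # 4,000,000 USD/month

coverage = monthly_revenue / monthly_server_cost
print(f"Subscriptions cover about {coverage:.0%} of the rumored server cost")
```

On these assumptions, subscription revenue covers less than a fifth of the rumored operating cost, which is consistent with the article's point that the user base cannot carry the expense.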

For these reasons, OpenAI wants to further cut costs and improve efficiency, and to ensure that the next generation of GPT models has sufficient computing power behind it, so taking the path of in-house chip development is understandable. Chips developed by OpenAI would naturally be optimized for GPT models, which would also make future co-iteration of the models and the hardware easier.



If you like this article, stay tuned to our website; more content is on the way. If you have product needs, please contact us.