Driven by AI and HPC Demand, HBM Capacity Demand Is Estimated to Increase by Nearly 60% in 2023
High Bandwidth Memory (HBM) emerged as a solution to memory transfer rates in high-speed computing being limited by the bandwidth of DDR SDRAM, and its revolutionary transfer efficiency is key to unlocking the full performance of core compute components. According to TrendForce, HBM has become the mainstream memory for high-end AI server GPUs: global HBM demand is estimated to increase by nearly 60% year over year to 290 million gigabytes (GB) in 2023, and to grow by a further 30% in 2024.
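To make these growth figures concrete, the back-of-the-envelope sketch below derives the implied 2022 base and the 2024 projection from the quoted numbers. The derived values are simple arithmetic from the cited estimates, not TrendForce figures themselves:

```python
# Arithmetic sketch of the HBM demand figures quoted above.
# Inputs come from the TrendForce estimates cited in the text;
# the 2022 base and 2024 projection are back-calculations.

demand_2023_m_gb = 290   # million GB in 2023, per TrendForce
growth_2023 = 0.60       # ~60% annual growth into 2023
growth_2024 = 0.30       # ~30% further growth in 2024

implied_2022 = demand_2023_m_gb / (1 + growth_2023)
projected_2024 = demand_2023_m_gb * (1 + growth_2024)

print(f"Implied 2022 demand:   ~{implied_2022:.0f} million GB")    # ~181
print(f"Projected 2024 demand: ~{projected_2024:.0f} million GB")  # ~377
```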
It is estimated that by 2025, if there are five super-large AIGC products on the scale of ChatGPT, 25 medium-sized AIGC products on the scale of Midjourney, and 80 small AIGC products worldwide, these alone will require computing resources equivalent to 145,600 to 233,700 NVIDIA A100 GPUs. Emerging applications such as supercomputing, 8K audio/video streaming, and AR/VR will further increase the load on cloud computing systems, all of which points to strong demand for high-speed computing.
Since HBM offers higher bandwidth and lower power consumption than DDR SDRAM, it is undoubtedly the best choice for building high-speed computing platforms. The contrast is clear from DDR4 SDRAM and DDR5 SDRAM, released in 2014 and 2020 respectively, whose bandwidths differ by only a factor of two; meanwhile, power consumption will keep climbing with DDR5 and the future DDR6, which is bound to drag on computing system performance. Taking HBM3 and DDR5 as an example, the former's bandwidth is 15 times that of the latter, and total bandwidth can be raised further by increasing the number of stacked DRAM dies. In addition, HBM can replace part of the GDDR SDRAM or DDR SDRAM, allowing power consumption to be controlled more effectively.
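For a concrete sense of the gap, the sketch below computes peak bandwidth from pin speed and bus width. The inputs are illustrative figures based on published JEDEC specifications (HBM3 at 6.4 Gb/s over a 1024-bit interface per stack, DDR5-6400 at 6.4 Gb/s over a 64-bit module); the exact ratio depends on which speed grades are compared, which is why quoted multiples such as the ~15x above vary slightly:

```python
# Rough peak-bandwidth comparison of one HBM3 stack vs. one DDR5 module.
# Pin speeds and bus widths are illustrative values from public JEDEC figures.

def peak_bandwidth_gb_s(pin_speed_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: pin speed (Gb/s) x bus width (bits) / 8 bits per byte."""
    return pin_speed_gbps * bus_width_bits / 8

hbm3_per_stack = peak_bandwidth_gb_s(6.4, 1024)  # ~819 GB/s per HBM3 stack
ddr5_module = peak_bandwidth_gb_s(6.4, 64)       # ~51 GB/s per DDR5-6400 module

print(f"HBM3 per stack:   {hbm3_per_stack:.1f} GB/s")
print(f"DDR5-6400 module: {ddr5_module:.1f} GB/s")
print(f"Ratio:            ~{hbm3_per_stack / ddr5_module:.0f}x")

# Total bandwidth scales with the number of stacks on the package,
# e.g. a hypothetical accelerator carrying 6 HBM3 stacks:
print(f"6 stacks total:   {6 * hbm3_per_stack / 1000:.1f} TB/s")
```

This stacking effect is what the paragraph above refers to: adding dies and stacks multiplies aggregate bandwidth in a way that widening a DDR bus cannot match at comparable power.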
At present, demand is strongest for AI servers equipped with NVIDIA A100 and H100 GPUs and AMD MI300 accelerators, as well as for servers built around self-developed ASICs from large CSPs such as Google and AWS. AI server shipments in 2023 (including servers equipped with GPUs, FPGAs, ASICs, etc.) are estimated at nearly 1.2 million units, an annual growth rate of nearly 38%; AI chip shipments are rising in tandem and are expected to grow by more than 50%.