2024-09-26
This article is compiled by Semiconductor Industry Horizon (ID: icviews).
SK Hynix announced that it has started mass production of 12-layer HBM3E chips, achieving 36 GB, the largest capacity among existing HBM products.
Today, South Korea's SK Hynix announced that it has begun mass production of the world's first 12-layer HBM3E product, with a capacity of 36 GB, the largest of any HBM product to date.
SK Hynix claims that the 12-layer HBM3E product meets the world's highest standards for speed, capacity, and stability. The company plans to supply mass-produced products to customers within the year.
Buoyed by the news, SK Hynix's stock price soared on the Korean stock market on Thursday.
SK Hynix is the first to achieve mass production of 12-layer HBM
In March this year, SK Hynix became the first in the industry to deliver 8-layer HBM3E products to customers. Six months later, it is again first to mass-produce 12-layer HBM3E chips, further demonstrating its technological lead.
Since launching the world's first HBM in 2013, SK Hynix is the only company to have developed and supplied the entire HBM line, from the first generation (HBM1) to the fifth generation (HBM3E).
Now that SK Hynix has achieved the industry's first mass production of 12-layer HBM3E, it aims to meet the growing needs of AI companies and maintain its leading position in the AI memory market.
SK Hynix President Justin Kim said, "SK Hynix has once again broken through technological limits in AI memory, demonstrating our industry leadership in the field… To overcome the challenges of the AI era, we will steadily prepare next-generation memory products and continue to hold our position as the world's number one."
Speed, capacity, and stability all meet the highest standards
According to the company, the 12-layer HBM3E product meets the world's highest standards in all areas essential for AI memory, including speed, capacity, and stability.
SK Hynix has raised the memory speed to 9.6 Gbps, the highest currently available. With a single GPU equipped with four HBM3E stacks running the large language model Llama 3 70B, all 70 billion parameters can be read 35 times per second.
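The "35 times per second" figure can be sanity-checked with a quick sketch. The assumptions below are not in the announcement: a 1,024-bit interface per HBM stack (the standard HBM I/O width) and FP16 weights (2 bytes per parameter).

```python
# Back-of-envelope check of the "read 70B parameters 35 times per second" claim.
# Assumptions (not stated in the announcement): 1024 I/O pins per HBM3E stack,
# FP16 weights at 2 bytes per parameter.
PIN_SPEED_GBPS = 9.6      # per-pin data rate quoted by SK Hynix
PINS_PER_STACK = 1024     # standard HBM interface width (assumed)
STACKS = 4                # four HBM3E stacks on one GPU, per the article
PARAMS = 70e9             # Llama 3 70B
BYTES_PER_PARAM = 2       # FP16 (assumed)

stack_bw_gbs = PIN_SPEED_GBPS * PINS_PER_STACK / 8   # GB/s per stack
total_bw_gbs = stack_bw_gbs * STACKS                 # aggregate GB/s
model_gb = PARAMS * BYTES_PER_PARAM / 1e9            # model size in GB
reads_per_sec = total_bw_gbs / model_gb
print(f"{stack_bw_gbs:.0f} GB/s per stack, {reads_per_sec:.0f} full reads/s")
```

Under these assumptions each stack delivers about 1.2 TB/s, four stacks about 4.9 TB/s, and reading the 140 GB of weights works out to roughly 35 passes per second, matching the quoted figure.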
SK Hynix's 12-layer product offers 50% more capacity than the previous 8-layer product at the same thickness. To achieve this, the company made each DRAM die 40% thinner than before and stacked the dies vertically using TSV (through-silicon via) technology.
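The numbers are internally consistent, as a short sketch shows. The 3 GB (24 Gb) per-die capacity is an inference from the totals, not stated in the article.

```python
# Checking the "50% more capacity at the same thickness" claim.
# Assumes 3 GB per DRAM die (inferred: 8 x 3 = 24 GB, 12 x 3 = 36 GB).
DIE_GB = 3
old_capacity = 8 * DIE_GB    # 8-layer stack: 24 GB
new_capacity = 12 * DIE_GB   # 12-layer stack: 36 GB
gain = (new_capacity - old_capacity) / old_capacity  # 0.5 -> 50%

# A die that is 40% thinner has 0.6x the original thickness, so twelve
# thinned dies stack shorter than eight original ones (7.2 vs 8 units),
# leaving room for bonding layers within the same package height.
new_stack_height = 12 * 0.6
print(new_capacity, f"{gain:.0%}", new_stack_height)
```

This also shows why the 40% thinning is more than strictly needed for capacity alone: the slack absorbs the extra bonding interfaces of the taller stack.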
The company also solved the structural problems of stacking thinner dies higher by applying its core Advanced MR-MUF process. This gives the new generation 10% better heat dissipation than its predecessor, while enhanced warpage control ensures product stability and reliability.
How is Samsung progressing?
HBM (High Bandwidth Memory) is a key companion to GPUs, helping to process the large volumes of data generated by complex applications; its vertically stacked dies save space while reducing power consumption.
Currently there are only three major HBM manufacturers: SK Hynix, Micron Technology, and Samsung Electronics. Among them, Samsung Electronics was the first to unveil an HBM3E 12H, in February this year.
Samsung's HBM3E 12H supports an all-time-high bandwidth of up to 1,280 GB/s, with a capacity of 36 GB. Compared with Samsung's 8-layer HBM3 8H, the HBM3E 12H improves both bandwidth and capacity by more than 50%.
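Samsung's 1,280 GB/s figure can likewise be related to a per-pin rate. The 1,024-bit interface width is an assumption (the standard HBM I/O width), not stated in the article.

```python
# Relating Samsung's quoted 1280 GB/s stack bandwidth to a per-pin data rate,
# assuming the standard 1024-bit HBM interface (assumption, not stated here).
BANDWIDTH_GBS = 1280
PINS = 1024
pin_rate_gbps = BANDWIDTH_GBS * 8 / PINS   # bits per second per pin
print(pin_rate_gbps)
```

Under that assumption the figure implies 10 Gbps per pin, slightly above the 9.6 Gbps SK Hynix quotes for its own part.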
"As AI service providers in the industry increasingly demand higher-capacity HBM, our new HBM3E 12H is designed to meet that need," said Yongcheol Bae, Executive Vice President of the Memory Product Planning Team at Samsung Electronics. "This new memory solution is part of our efforts to develop multi-layer stacked HBM core technology and to provide technology leadership for the high-capacity HBM market in the AI era."
The HBM3E 12H uses advanced thermal compression non-conductive film (TC NCF) technology to keep 12-layer stacks the same height as 8-layer ones, meeting current HBM package requirements. As the industry works to mitigate the die warpage that comes with thinner dies, the technology should prove even more valuable in taller stacks. Samsung has progressively reduced the thickness of its NCF material, shrinking the gap between dies to 7 micrometers (µm) while eliminating voids between layers. These efforts give the HBM3E 12H more than 20% higher vertical density than the HBM3 8H.
Samsung's TC NCF technology also improves the HBM's thermal performance by allowing bumps of different sizes between dies: during bonding, smaller bumps are used in signal-transmission areas, while larger bumps are placed where heat dissipation is needed. This approach also helps improve yields.
With the exponential growth of AI applications, the HBM3E 12H is expected to become the preferred solution for future systems that demand more memory. Its high performance and large capacity will let customers manage resources more flexibly while reducing data centers' total cost of ownership (TCO). Compared with HBM3 8H, the HBM3E 12H is expected to speed up AI training by an average of 34% and to support more than 11.5 times as many concurrent users of inference services.
On mass-production progress, Samsung said it has begun sampling the HBM3E 12H to customers and expects to start high-volume production in the second half of this year.