2024-09-26
IT Home reported on September 26 that SK hynix announced it has become the first company in the world to begin mass production of 12-layer HBM3E chips, reaching 36 GB, the largest capacity among existing HBM products. The company plans to deliver the product to customers within the year.
On the news, SK hynix's share price in South Korea rose more than 8%, pushing its market value above 120.34 trillion won (IT Home note: currently about 635.155 billion yuan).
According to the report, SK hynix stacked twelve 3 GB DRAM dies to match the thickness of the existing 8-layer product while increasing capacity by 50%. To do so, the company made each DRAM die 40% thinner than before and stacked them vertically using through-silicon via (TSV) technology.
SK hynix also addressed the structural problems that arise when thinner dies are stacked in greater numbers. The company applied its core Advanced MR-MUF process to the product, improving heat dissipation by 10% over the previous generation and tightening control of warpage to ensure stability and reliability.
SK hynix is the only company to have developed and supplied a full lineup of HBM products, from the first-generation HBM, launched globally in 2013, to the fifth generation, HBM3E. Being first in the industry to mass-produce a 12-layer stacked product not only meets the growing demands of artificial intelligence companies but further consolidates SK hynix's leadership in memory for AI.
SK hynix said the 12-layer HBM3E reaches the industry's highest levels of speed, capacity, and stability among the qualities required of AI-oriented memory. It runs at up to 9.6 Gbps per pin; on a GPU equipped with four such HBM stacks, it could read the entire 70 billion parameters of the 'Llama 3 70B' large language model 35 times per second.
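The "35 reads per second" claim can be reproduced with back-of-envelope bandwidth math. The sketch below assumes a 1024-bit interface per HBM stack (standard for HBM generations to date) and 2-byte FP16 weights; neither assumption is stated in the article.

```python
# Rough check of reading Llama 3 70B's parameters 35 times per second.
PIN_SPEED_GBPS = 9.6     # per-pin data rate quoted in the article
PINS_PER_STACK = 1024    # HBM interface width per stack (assumption)
NUM_STACKS = 4           # four HBM stacks on the GPU, per the article
PARAMS = 70e9            # Llama 3 70B parameter count
BYTES_PER_PARAM = 2      # FP16 weights (assumption)

per_stack_gbs = PIN_SPEED_GBPS * PINS_PER_STACK / 8  # GB/s per stack
total_gbs = per_stack_gbs * NUM_STACKS               # aggregate bandwidth
model_gb = PARAMS * BYTES_PER_PARAM / 1e9            # model size in GB
reads_per_second = total_gbs / model_gb

print(f"{per_stack_gbs:.1f} GB/s per stack, {total_gbs:.1f} GB/s total")
print(f"~{reads_per_second:.0f} full parameter reads per second")
# → 1228.8 GB/s per stack, 4915.2 GB/s total
# → ~35 full parameter reads per second
```

Under these assumptions each stack delivers about 1.23 TB/s, roughly 4.9 TB/s across four stacks, and streaming a 140 GB FP16 model at that rate works out to about 35 passes per second, matching the figure SK hynix cites.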