Micron Begins Mass Production Of Memory Chip ‘HBM3E’


Micron Technology, the United States’ leading producer of memory chips, has begun mass production of its high-bandwidth memory (HBM3E) semiconductors for generative AI and high-performance computing, beating dominant Korean players Samsung Electronics and SK hynix to the milestone.

Nvidia plans to incorporate this chip into its forthcoming H200 graphics processing units, slated for shipment in the second quarter, superseding the current H100 chip, which has significantly bolstered revenue for the company.

The Boise, Idaho-based latecomer is thus the first chipmaker to mass-produce the new HBM standard, an unanticipated feat given its modest market share in the memory chip segment.

Micron shares surged 4.02 percent on Tuesday, while those of SK hynix fell by 4.94 percent to close at 153,800 won.

The move coincides with Samsung Electronics’ announcement of its successful development of HBM3E chips with the industry’s largest capacity of 36 gigabytes.


The Suwon, Gyeonggi-based company has already begun sending product samples to its clients, and the chips are slated for mass production in the first half of this year.

HBM3E chips stack twelve 24-gigabit dynamic random access memory (DRAM) chips, delivering 36 gigabytes of capacity and peak memory bandwidth of 1.28 terabytes per second. Both capacity and bandwidth have improved by 50 percent compared with the predecessor, the eight-stack HBM3.
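
As a back-of-the-envelope check (a minimal sketch in Python, not part of the article), the 36-gigabyte capacity and the 50 percent gain follow directly from the quoted die density and stack heights, assuming the eight-stack predecessor uses the same 24-gigabit dies:

```python
# Arithmetic check of the HBM3E 12-high figures quoted above.
# Die density and stack counts come from the article; the assumption
# that the eight-stack predecessor uses the same 24 Gb dies is ours.

GBIT_PER_DIE = 24      # each DRAM die stores 24 gigabits
NEW_STACK = 12         # twelve-high HBM3E stack
OLD_STACK = 8          # eight-high HBM3 predecessor

new_gb = GBIT_PER_DIE * NEW_STACK / 8   # 288 Gbit = 36 gigabytes
old_gb = GBIT_PER_DIE * OLD_STACK / 8   # 192 Gbit = 24 gigabytes
gain = new_gb / old_gb - 1              # 0.5, i.e. 50 percent

print(f"HBM3E 12H capacity: {new_gb:.0f} GB ({gain:.0%} over the 8-high stack)")
```

Run as written, this prints "HBM3E 12H capacity: 36 GB (50% over the 8-high stack)", matching the figures cited above.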

Despite the additional layers, the HBM3E chips are the same height as eight-layer ones to meet current package requirements, a feat made possible by the application of advanced thermal compression non-conductive film (TC NCF) technology.

The chipmaker has also reduced the thickness of its NCF material, achieving the industry’s smallest gap between chips, at seven micrometers.

“The industry’s AI service providers are increasingly requiring HBM with higher capacity, and our new HBM3E 12H product has been designed to answer that need,” said Bae Yong-cheol, executive vice president of memory product planning at Samsung Electronics.

“This new memory solution forms part of our drive toward developing core technologies for high-stack HBM and providing technological leadership for the high-capacity HBM market in the AI era.”

When applied in AI services, the new chips will increase the average speed of AI training by 34 percent compared with HBM3 products, while expanding the number of simultaneous users of inference services by a factor of 11.5.
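
For a concrete sense of what that speedup means (an illustrative calculation using the article’s own figures, not a benchmark), a 34 percent increase in average training speed cuts the wall-clock time of a fixed training job by roughly a quarter:

```python
# Illustrative arithmetic on the vendor-quoted improvements above.
speedup = 1.34      # 34 percent faster average AI training (quoted)
users = 11.5        # factor increase in simultaneous inference users (quoted)

# A fixed workload finishes in 1/1.34 of the original wall-clock time,
# i.e. roughly a 25 percent reduction.
time_cut = 1 - 1 / speedup

print(f"Training time reduced by {time_cut:.0%}; "
      f"inference serves {users:.1f}x the concurrent users")
```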

Source: Reuters
