AI chips in 2025: Smaller, faster, smarter

Jo He-rim

The Korea Herald

Samsung Electronics' chip fabrication plant in Pyeongtaek, Gyeonggi Province. PHOTO: SAMSUNG ELECTRONICS/THE KOREA HERALD

February 3, 2025

SEOUL – The rapid expansion of artificial intelligence is reshaping the global semiconductor industry, driving an unprecedented surge in demand for faster, smaller and more energy-efficient chips.

This year, the global semiconductor market is expected to grow by about 15 percent, backed by the AI boom, according to market tracker IDC.

To meet growing demand for high-end logic process chips and high-bandwidth memory capable of processing massive data volumes in AI applications, chipmakers will be focusing on cutting-edge process nodes and advanced chip packaging technologies in 2025, pushing the limits of miniaturization and efficiency.

“As AI continues to drive demand for high-end logic process chips and increases the penetration rate of high-priced high bandwidth memory (HBM), the overall semiconductor market is expected to have double-digit growth in 2025,” said Galen Zeng, senior research manager at IDC Asia-Pacific.

“The semiconductor supply chain — spanning design, manufacturing, testing and advanced packaging — will create a new wave of growth opportunities under the cooperation between the upstream and downstream industries.”

There are basically two ways to make semiconductors more powerful: scaling technology, which shrinks transistors to nanometer sizes so more of them fit on a chip, and innovative packaging techniques, which further enhance data transfer speeds and efficiency.

Ultrafine processes

In 2025, the competition for commercializing cutting-edge 2-nanometer technology will take center stage, with major wafer makers like Samsung Electronics, Taiwan Semiconductor Manufacturing Co. and Intel entering the race for mass production with massive investments.

Contract manufacturer TSMC is leading the race, having begun trial production of its 2nm process using the Gate-All-Around transistor architecture last April. The company aims for mass production in the second half of this year. Initial yield rates for the trial production are reportedly around 60 percent.

Samsung Electronics has also set its timeline for 2nm mass production in the second half of this year, after starting trial production in the first half. Having missed the chance to gain a lead in the 3nm process, Samsung’s priority will be securing meaningful yields for the 2nm process.

The company was the industry’s first to mass produce 3nm chips with its novel Gate-All-Around transistor architecture in 2022, but gave way to TSMC after it failed to secure stable yields.

“We were the first to introduce the GAA technology but we still have many shortcomings in commercialization,” Han Jin-man, head of Samsung’s foundry business, said in his inaugural message in December. “We should break the cycle where we compete again for the next process node after losing ground in the previous one (the 3nm process).”

Intel and Rapidus are also joining the race for next-generation process nodes. Rapidus, a Japanese government-backed foundry startup, plans to deliver 2nm prototypes to Broadcom in June. Meanwhile, Intel is looking to jump ahead by focusing on its 1.8nm process this year, targeting mass production in 2026.

Advanced chip packaging

Advanced packaging technology is another key focus area for chip giants this year. Also known as back-end wafer processing, it is emerging as a solution to overcome the physical limitations of chip scaling and deliver better performance. By stacking chips vertically through 3D integration and using innovative bonding techniques to reduce thickness and heat, companies aim to improve processing power and energy efficiency.

“Those who dominate packaging are expected to dominate the semiconductor market in the future,” said Lee Kang-wook, head of package development at SK hynix. “It will determine companies’ future survival.”

SK hynix is investing billions of dollars in building an advanced chip packaging facility in the US, while Samsung recently announced plans to expand its back-end processing plants in Korea for cutting-edge chips, including the popular HBM chips.

TSMC is doubling down on investment in chip packaging technologies. The company plans to double the number of wafers using its Chip on Wafer on Substrate (CoWoS) technology, which horizontally connects different chips on a substrate. An increasing portion of Nvidia’s graphics processing units are produced by TSMC using CoWoS.

Samsung Electronics and SK hynix, the world’s top two memory chipmakers, are adopting different chip packaging technologies to better stack their DRAM chips and produce the lucrative high bandwidth memory chips, a key component used to enhance the AI processing performance of GPUs.

SK hynix, the dominant player in the HBM market, plans to implement both hybrid bonding and its advanced MR-MUF packaging technologies in its sixth-generation HBM4 chip, set for mass production in the second half of this year.

Advanced MR-MUF heats and interconnects all the vertically stacked chips in an HBM product at once, then fills the space between the chips with a liquid epoxy molding compound. Hybrid bonding is a technology that connects chips directly, without bumps between them, reducing the overall thickness of the stack and enabling more layers to be stacked.

Samsung is also reviewing the introduction of hybrid bonding to its upcoming HBM4 products, along with its original TC-NCF method, which applies a film-type material after each chip is stacked and then melts the substance under heat and pressure to glue the chips together.

Next-generation products

Korean chip giants are also ramping up efforts to gain a leading edge in new product areas that will become the next competitive battlefield after HBM: Compute Express Link, or CXL, and processing-in-memory, or PIM, chips.

The nearer-term opportunity is CXL, a next-generation interface that maintains memory coherency between the CPU memory space and memory on attached devices. The technology is viewed as key infrastructure to enhance efficiency and lower costs at AI data centers handling massive volumes of data, as it enables resource sharing for higher performance without physically expanding servers.
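The resource-sharing idea behind CXL can be loosely illustrated with a toy model: instead of each server being provisioned with enough local DRAM for its own peak demand, servers borrow capacity from a shared pool and return it when done. This is a conceptual sketch only; the class and method names are invented for illustration and do not reflect any real CXL API.

```python
# Conceptual toy (no real CXL APIs): models CXL-style memory pooling,
# where servers borrow capacity from a shared pool instead of each
# being provisioned with enough local DRAM for its own peak demand.

class MemoryPool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocated = {}            # server name -> GB currently borrowed

    def free_gb(self):
        return self.capacity_gb - sum(self.allocated.values())

    def borrow(self, server, gb):
        if gb > self.free_gb():
            raise MemoryError(f"pool exhausted: {server} wanted {gb} GB")
        self.allocated[server] = self.allocated.get(server, 0) + gb

    def release(self, server):
        self.allocated.pop(server, None)   # capacity returns to the pool


pool = MemoryPool(capacity_gb=512)
pool.borrow("server-a", 200)   # server-a hits a peak workload
pool.borrow("server-b", 100)
pool.release("server-a")       # its capacity becomes available again
pool.borrow("server-c", 400)   # reuses what server-a returned

print(pool.free_gb())
```

Because freed capacity is immediately reusable by any server, the pool serves three workloads whose combined peaks (700 GB) exceed its 512 GB capacity, which is the cost-saving effect the article describes.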

Samsung, which developed the industry’s first DRAM based on CXL 1.1 technology, is set to mass produce its CMM-D modules incorporating next-generation CXL 2.0 technology. The company forecasts the market will take off in 2027 and 2028.

SK hynix is also aiming to secure an early lead. At CES 2025, the world’s largest tech show, held in Las Vegas last month, the chipmaker exhibited not only its flagship HBM chips, but also next-generation products it has developed, including CXL and PIM products.

“We are showcasing not only our key AI products, such as HBM and enterprise SSDs, but also next-generation AI memory chips and solutions optimized for on-device AI. We will broadly demonstrate our technological prowess, reinforcing our brand as a full-stack AI memory provider,” said Kim Ju-sun, chief marketing officer of AI infrastructure at SK hynix.

PIM is the next technology in line. It integrates a processor with random access memory on a single memory module, allowing the chip to both store and process data in the same place.
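The payoff of storing and processing data in the same place is that far less data has to cross the bus between memory and processor. The toy model below is purely illustrative (the classes and the byte-counting are invented for this sketch, not a real PIM interface): a conventional design copies every element out of memory before the CPU can compute, while the PIM-style design computes inside the module and moves only the result.

```python
# Conceptual toy model (not a real PIM API): contrasts a conventional
# design, where data is copied out of memory before the CPU computes,
# with a PIM-style design that computes where the data is stored.

class ConventionalMemory:
    """Data must be read out (copied) before the CPU can compute on it."""
    def __init__(self, data):
        self.data = list(data)
        self.elements_moved = 0

    def read_all(self):
        self.elements_moved += len(self.data)  # every element crosses the bus
        return list(self.data)


class PIMMemory:
    """A processing unit inside the module computes on the data in place."""
    def __init__(self, data):
        self.data = list(data)
        self.elements_moved = 0        # only the final result leaves the module

    def in_place_sum(self):
        return sum(self.data)          # computed where the data lives


values = [1, 2, 3, 4]

conv = ConventionalMemory(values)
cpu_sum = sum(conv.read_all())         # CPU computes after the copy

pim = PIMMemory(values)
pim_sum = pim.in_place_sum()           # same answer, no bulk data movement

print(cpu_sum, pim_sum, conv.elements_moved, pim.elements_moved)
```

Both designs produce the same sum, but the conventional path moves every element while the in-memory path moves none, which is the data-movement saving that makes PIM attractive for AI workloads.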
