Intel, TSMC Tout SRAM Breakthroughs At 2nm
But this is completely overshadowed by reports that Intel is a takeover target.
By Mark LaPedus
At this week’s 2025 International Solid-State Circuits Conference (ISSCC) in San Francisco, Intel and TSMC separately presented papers that provided more details about their next-generation processes, including new breakthroughs in the SRAM arena.
But the papers were largely overshadowed by ongoing reports that Intel is a possible takeover target, with TSMC as a potential investor, creating some anxiety at Intel, if not across the entire semiconductor industry.
Nonetheless, in one ISSCC paper, Intel described a high-density SRAM design built around its next-generation 18A process. Slated for release later this year, Intel’s 18A process is critical for the company’s future.
Then, TSMC described an SRAM design based on its upcoming 2nm process. TSMC, the world’s largest foundry vendor, is expected to release its new 2nm process later this year.
SRAM is a type of high-speed memory. In one common application, SRAM forms the cache memory in chip designs. Basically, SRAM-based cache stores data that a processor in a system can access quickly. But SRAM cells take up a disproportionate amount of space in chip designs. And for years, it has been difficult to scale the SRAM, which in turn impacts the size and cost of high-performance chips.
Based on their papers at ISSCC, Intel and TSMC have separately found ways to reduce the size of, or scale, the SRAM. This could knock down at least one roadblock in chip design, paving the way towards new and more cost-effective devices in the future. But an assortment of other challenges remain in developing future chips.
Besides Intel and TSMC, others are also working on 2nm processes. Samsung is expected to ship its 2nm process later this year. And Rapidus, a new foundry startup based in Japan, is also developing a 2nm technology, which is supposed to appear in 2027.
Nonetheless, the ISSCC papers from Intel and TSMC would normally generate a buzz. But they were overshadowed by ongoing reports about Intel and its uncertain future. In the latest report, Broadcom is looking at buying the design part of Intel, according to the Wall Street Journal. And ironically, TSMC is mulling over plans to invest in Intel’s fabs, according to the report.
“Intel is looking at multiple options,” said Mark Webb, a principal/consultant at MKW Ventures Consulting. “Many are complex and could fall through.”
There has been no official announcement here. Many are skeptical that these deals will transpire. But it leaves a cloud hanging over Intel, a troubled chip giant that has seen a wave of losses, layoffs and product setbacks.
System architecture and chip scaling
Meanwhile, PCs, smartphones and servers share the same basic architecture. Each system is built around a board, which holds a processor, memory chips and storage.
The processor is used to process data. In some cases, SRAM-based cache memory, or L1 cache, is integrated into the processor for fast data access. A system also typically has larger L2 and L3 caches. In older systems these were separate, off-chip devices, but in modern processors they are usually integrated on the same die.
DRAM, another memory chip type, handles the main memory functions in systems. And solid-state drives (SSDs) or a hard disk drive handles the data storage functions.
In operation, when the processor needs data, it first checks the cache. “If the data is there, the CPU can access it quickly. If not, it must fetch the data from the slower main memory,” according to GeeksforGeeks, a technology site.
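The lookup described above can be sketched in a few lines of Python. This is an illustrative model only, not any vendor's implementation; the addresses and values are made up.

```python
# Illustrative model of the cache lookup described above: check the
# fast cache first, and fall back to slower main memory on a miss.
cache = {}  # small and fast: stands in for SRAM-based cache
main_memory = {addr: addr * 2 for addr in range(1024)}  # large and slow: stands in for DRAM

def read(addr):
    if addr in cache:          # cache hit: fast path
        return cache[addr]
    value = main_memory[addr]  # cache miss: slow fetch from main memory
    cache[addr] = value        # fill the cache for next time
    return value

print(read(7))   # first access: a miss, fetched from main memory
print(read(7))   # second access: a hit, served from the cache
```

Real caches are far more elaborate (fixed sizes, eviction policies, multiple levels), but the hit/miss decision above is the core idea.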
This architecture works to one degree or another, but there are some challenges here. Consumers want faster and more capable PCs and smartphones. And amid the AI boom, data center operators want faster servers. Thus, suppliers of these systems are under pressure to develop new products using new and faster chips with lower power.
To meet these and other demanding requirements, chip suppliers must respond quickly and develop new devices. This isn’t a simple process. In the semiconductor flow, a company designs a chip line using specialized software. Then, a chipmaker manufactures the chip line based on that design in a large facility called a fab. In the fab, chipmakers produce a chip line using an assortment of equipment.
At one time, semiconductor manufacturers developed a new process every 18 to 24 months. Then, they produced new and faster chips based on that process. The overall goal was (and still is) to shrink select feature sizes of a transistor in a chip design by 0.7x at each generation.
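The 0.7x figure is worth unpacking: because area scales with the square of linear dimensions, a 0.7x linear shrink roughly halves the area of a given circuit at each generation. A quick check:

```python
# Quick arithmetic check (illustrative, not data from this article):
# shrinking linear feature sizes by 0.7x per generation roughly halves
# the area of a given circuit, doubling transistor density.
linear_shrink = 0.7
area_ratio = linear_shrink ** 2  # area scales with the square of linear dimensions
print(area_ratio)                # 0.49 -> about half the area per generation
```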
Transistors are one of the key building blocks in chips. These are tiny structures that act like electronic switches in devices. An advanced chip may have billions of tiny transistors in the same device.
For years, chipmakers managed to shrink the transistor at each new generation. This in turn enabled new and faster chips at each turn. In recent times, though, the chip-manufacturing process has become more challenging and expensive. The most advanced chips may undergo 1,000 process steps or more in a fab. Many of those steps are complex and expensive.
This complexity has negatively impacted chip manufacturing in more ways than one. Now, semiconductor manufacturers develop a new process every 24 to 36 months. Plus, the price/performance benefits of a new process are diminishing at each turn. Moreover, transistor scaling has slowed.
Take the SRAM for example. For years, the industry has struggled to reduce the size, or scale, this device. In 2022, SRAM scaling stalled at the so-called 3nm node. At the time, chip vendors were still developing and shipping new devices. But a slowdown in SRAM scaling impacted the die sizes and costs of various high-performance chips, such as GPUs, processors and others.
In 2022, AMD took some steps to solve some of the problems by going vertical. Basically, AMD bonded a cache module on top of a processor, which saved space and increased the amount of L3 cache in a system. AMD calls this technology 3D V-Cache. Today, AMD is using 3D V-Cache for various processor lines.
TSMC
Now, TSMC is taking steps to solve the problem. At present, TSMC’s most advanced process is based on a 3nm technology. The company’s 3nm process incorporates tiny transistor structures called finFETs, which are used to develop high-performance chips.
But beyond the 3nm node, the workhorse finFET transistor has reached its physical limits. So, starting at the 2nm node (N2) in 2025, TSMC will migrate to a new transistor type called the nanosheet FET. It’s also known as a gate-all-around (GAA) transistor. Nanosheet FETs provide better performance than finFETs, but they are harder and more expensive to make in the fab.
Still, TSMC’s 2nm process with GAA is an important technology for the semiconductor industry. In many ways, the process gets the industry back on track in traditional transistor scaling. “N2 delivers a full node benefit from the previous 3nm node in offering a 15% speed gain or a 30% power reduction with a >1.15x chip density increase,” said Geoffrey Yeap, vice president of advanced R&D technology at TSMC, in a recent paper at IEDM.
Then, after stalling at the 3nm node, SRAM scaling is back at the 2nm node, at least to some degree. In the IEDM paper, TSMC discussed the SRAM design for its 2nm process. At this week’s ISSCC, TSMC provided more details about it.
TSMC’s 2nm-based SRAM macro has a capacity of 580Kb (4096×145) using cells with a size of 0.021μm². The overall SRAM density is improved by 10% compared to the previous node, resulting in a 38.1Mb/mm² density.
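Those figures are straightforward to sanity-check. The sketch below assumes 4096×145 is the macro's bit count and that 1Kb means 1,024 bits; the array-efficiency figure at the end is an inference from the quoted numbers, not a value from the paper.

```python
# Sanity-checking the reported SRAM macro figures (assumptions noted above).
bits = 4096 * 145
print(bits)         # 593,920 bits
print(bits / 1024)  # 580.0 -> matches the quoted 580Kb capacity

# Raw bit-cell density if the whole area were bit cells (no periphery):
cell_area_um2 = 0.021                 # quoted bit-cell size
cells_per_mm2 = 1e6 / cell_area_um2   # 1 mm^2 = 1e6 um^2
print(cells_per_mm2 / 1e6)            # ~47.6 Mb/mm^2 raw

# The reported 38.1 Mb/mm^2 then implies roughly 80% array efficiency,
# i.e. the fraction of area not consumed by periphery circuits.
print(38.1 / (cells_per_mm2 / 1e6))
```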
For the 2nm SRAM design, TSMC minimized the periphery while maximizing the bit cell array size. “This is achieved by increasing the number of bit cells per BL (bitline), as the 2nm nanosheet technology improves the cell’s on-to-off current ratio. This advancement allows for a 2X increase in the maximum BL loading compared to the previous technology,” said Tsung-Yung Jonathan Chang from TSMC, in the ISSCC paper. Others contributed to the work.
With finFETs, the maximum number of cells per BL is limited to 256. “In contrast, the 2nm nanosheet technology allows an increase to 512 cells per BL due to the improved on-current to off-current ratio of the bit cell. This enhancement significantly boosts the cell efficiency of the SRAM macro. Additionally, by increasing the BL capacity to 512 cells and adopting the flying BL (FBL) architecture, the array efficiency improved,” Chang said.
TSMC’s new SRAM architecture features two banks: a 512-row top bank and a 512-row bottom bank. The two banks are connected, creating a 1,024 pseudo-row architecture.
This in turn boosts the density of the SRAM design, but it also presents some challenges. There is a significant increase in bitline resistance and capacitance. “To address these challenges, we propose placing the write assist and BL pre-charge blocks at the far side of the array. This enhances the writability and pre-charge strength for the far end cells,” Chang said.
Intel
Intel, meanwhile, is moving full speed ahead with its 18A process, which is arguably a 1.8nm technology. Intel’s 18A process combines a GAA transistor architecture with a backside power delivery technology. Intel refers to its GAA transistor as the RibbonFET. RibbonFETs and nanosheet FETs are basically the same thing. Intel refers to its backside power delivery technology as the PowerVia.
Intel’s 18A is an important technology for the company. Up until a decade ago, Intel was the technology leader. Then, around 2017, Intel stumbled and fell behind. Intel hopes 18A will help bring the company back into a leadership position in process technology.
But it’s unclear if Intel will ever regain its leadership position. “The introduction of 18A will not make Intel cost competitive in 2025 or 2026 due to volumes and ramp costs,” said MKW Ventures’ Webb. “Intel does not have the volume with internal and external foundry work to be cost competitive at this time. It is not clear when they can reach this level of volume, if ever.”
Meanwhile, Intel’s ISSCC paper described a high-current (HCC) and high-density (HDC) 6T SRAM architecture implemented in a 18A-based RibbonFET and a PowerVia technology. “This work achieves an SRAM bitcell area of 0.023μm2 for HCC and 0.021μm2 for HDC with 0.77x and 0.88x area scaling compared to finFET-based designs,” said Xiaofei Wang of Intel in the ISSCC paper. Others contributed to the work.
In finFETs, the fin width is quantized and determined by the number of fins. In GAA, though, the width of nanosheets or ribbons can vary, depending on the application. “The pull-up (PU), pass-gate (PG) and pull-down (PD) transistors in a RibbonFET 6T bitcell can be of arbitrary width, similar to planar transistors, compared to finFETs where the device sizing and ratio must be quantized. In addition, the width of the nanoribbon between two adjacent transistors can be sized differently through a ribbon jog, which allows the PG and PD transistors to be sized differently,” Wang said.
“It provides an important knob for optimizing an SRAM bitcell for lower minimum operation voltage (VMIN). A low PG:PD ratio improves the read static-noise margin but degrades the write margin. The optimum PG:PD ratio achieves the lowest VMIN, between the read and write path. The RibbonFET technology allows for both the HDC and HCC bitcells to be designed to achieve a competitive VMIN, without resorting to the wordline underdrive (WLUD) read-assist technique, and it results in additional read-performance improvements without the wordline being underdriven,” he said.
Another innovation is Intel’s backside power-delivery network (BS-PDN). “BS-PDN with PowerVia uses a backside metal-stack with less resistance, while also relaxing frontside-metal congestion and metal pitch requirements. Integrating either VSS or VCC PowerVias into the memory bitcell results in a significant cell-area increase,” Wang said.
Samsung
Samsung is also a player at the leading edge. In 2022, Samsung began shipping chips based on nanosheet FETs at the 3nm node. Samsung took the lead in the nanosheet market. But the company continues to struggle with its yields here.
Samsung hopes to get back on track and ship its 2nm nanosheet technology in 2025. Samsung refers to its 2nm technology as SF2. “For GAA, Samsung Foundry will provide a highly advanced SF2 technology that offers significant improvements in performance, power, and area over SF3 GAA,” according to Samsung.
Rapidus
Japanese foundry startup Rapidus has some ambitious plans. The company’s first technology is a 2nm GAA process, which is due out in 2027. Rapidus’ technology partner is IBM.
That seems like a far-fetched plan, however. Then again, the thought that Intel could be on the block was inconceivable several years ago. So were the Trump administration’s efforts to get TSMC to invest in Intel’s fabs. And so were the proposed (and otherwise ridiculous) chip tariffs.
Nothing is for certain in the semiconductor industry these days.