
Flexible memory capacity expansion for data intensive workloads

Memory capacity expansion that optimizes cost and performance by intelligently balancing compute and memory resources.

Ability to scale servers with high-capacity CXL standards-based memory

The CZ120 memory expansion module, built on Compute Express Link™ (CXL), enables server OEMs to scale, integrate and expand memory capacity for a multitude of application workloads.

Optimized performance beyond the direct-attach memory channels

Flexibility to compose servers with higher memory capacity and low latency to meet application workload demands, with up to 24%1 greater memory bandwidth per core versus RDIMM only.

1. MLC bandwidth using 12-channel 4800MT/s RDIMM + 4x256GB CZ120 vs. RDIMM only.
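For context, here is a rough, back-of-the-envelope sketch of where that extra bandwidth headroom comes from. It is illustrative only and is not the footnoted MLC measurement; it assumes DDR5-4800 RDIMMs transferring 8 bytes per cycle across 12 channels and a PCIe Gen5 x8 link per CZ120 module (about 32 GB/s of raw bandwidth per direction, before encoding and protocol overhead).

```c
#include <stdio.h>

int main(void) {
    /* Direct-attach DRAM: 12 channels of DDR5-4800, 8 bytes per transfer. */
    const double ddr5_channel_gbs = 4800e6 * 8 / 1e9;  /* ~38.4 GB/s per channel */
    const double rdimm_gbs        = 12 * ddr5_channel_gbs;

    /* Assumed CXL expansion: 4 modules, each on a PCIe Gen5 x8 link.
       ~32 GB/s raw per direction; sustained throughput is lower after
       encoding and CXL protocol overhead. */
    const double cxl_module_gbs = 32.0;
    const double cxl_total_gbs  = 4 * cxl_module_gbs;

    printf("RDIMM-only peak:      %.1f GB/s\n", rdimm_gbs);
    printf("With 4x CXL modules:  %.1f GB/s\n", rdimm_gbs + cxl_total_gbs);
    printf("Theoretical headroom: %.0f%%\n", 100.0 * cxl_total_gbs / rdimm_gbs);
    return 0;
}
```

The theoretical headroom printed here (roughly 28%) is of the same order as the measured per-core figure above; actual results depend on protocol overhead, access patterns and core counts, which is why the measured value differs.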


Lower total cost of ownership (TCO)

Greater utilization of compute and memory resources for memory-bound applications, reducing CapEx and OpEx.

CXL Memory Expansion: A Closer Look at an Actual Platform

This whitepaper examines the rationale for CXL-based memory and, together with AMD, demonstrates its value. It describes a series of performance tests run using Micron CXL-based memory expansion modules with the AMD EPYC™ 9754 CPU and presents the results.

Read whitepaper >


Using HPC to solve the world’s largest challenges

Micron is helping CERN transform extraordinary amounts of data into insight for researchers exploring the origins of the universe. See how Micron memory expansion modules based on CXL™ are providing the capacity, flexibility and bandwidth needed for seamless access to buffered data from multiple processors and compute accelerators.

Read blog >


Memory Lakes: The evolving landscape of memory

Micron looks at the future of memory – shared memory lakes – in data center and high-performance computing (HPC) environments. As data grows, avoiding costly data movement becomes ever more critical. The new CXL 3.1 standard allows for even greater efficiency by leveraging shared memory.

Read blog >


Compute Express Link (CXL) in the data center

Ryan Baxter, Sr. Director, Data Center Segment, and Eric Caward, Sr. Manager, Product Marketing, sit down on Micron's Chips Out Loud podcast to discuss Compute Express Link. CXL is the premier open standard for high-speed CPU-to-device and CPU-to-memory connections in high-performance data centers, and it will usher in a new age of composability, making data centers more efficient and more flexible.

Listen to podcast >


How CXL will aid in AI and Large Language Models (LLMs) in solving new problems

Ryan Baxter, Sr. Director, Product Management at Micron, sits down with Patrick Moorhead and Daniel Newman of Six Five Insider Edition to discuss memory expansion using CXL, its role in AI, and next steps in development and deployment.
Watch the video >

How CXL will help overcome memory bandwidth challenges in the data center

Patrick Moorhead, Moor Insights and Strategy, sits down with Ryan Baxter to discuss the memory sharing benefits of CXL in the data center.
Watch the video >

Micron memory expansion for data-intensive workloads

See how Micron memory expansion modules supporting CXL address system memory bottlenecks by delivering memory capacity and bandwidth expansion for emerging data-intensive applications and workloads.
Watch the video >

Micron enabling the next-generation scalable and flexible data center using CXL

Data centers are becoming more complex with increasing workload demands. Micron is shaping the future of the data center using CXL to provide flexible and scalable memory sharing and data center memory expansion.
Watch the video >

Frequently asked questions

What is CXL?

CXL (Compute Express Link) is a high-speed, industry-standard interconnect for communication between processors, accelerators, memory, storage and other I/O devices.

CXL increases efficiency by enabling composability, scalability and flexibility for heterogeneous and distributed compute architectures. It allows applications to share memory among CPU, GPU and FPGA devices, improving resource utilization and accelerating compute.

What are the three types of CXL devices?

Type 1 (CXL.io + CXL.cache) CXL device

Type 1 devices, such as FPGAs and IPUs, implement a fully coherent cache but no host-managed device memory. The CXL.io protocol handles device initialization, link-up, enumeration and device discovery, while CXL.cache lets the device coherently cache host memory.

Type 2 (CXL.io + CXL.cache + CXL.mem) CXL device

Type 2 devices implement both a coherent cache and host-managed device memory. Typical applications are accelerators that have high-bandwidth memory attached.

Type 3 (CXL.io + CXL.mem) CXL device

Type 3 devices expose only host-managed device memory, with no device cache. Typical applications are memory expanders for the host.
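
To make the Type 3 case concrete, below is a minimal C sketch (an illustration, not Micron sample code) that allocates a buffer on a specific NUMA node with libnuma. It assumes a Linux host on which the CXL memory expander is exposed as a CPU-less, memory-only NUMA node; the node ID passed on the command line is hypothetical and can be found with `numactl -H`.

```c
/* Build: gcc cxl_alloc.c -lnuma -o cxl_alloc
 * Run:   ./cxl_alloc <numa-node-id-of-cxl-memory>
 */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <numa-node>\n", argv[0]);
        return 1;
    }
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int node = atoi(argv[1]);   /* assumed: the CPU-less CXL memory node */
    size_t size = 1UL << 30;    /* 1 GiB test buffer */

    /* Ask the kernel to place the buffer's pages on the requested node. */
    void *buf = numa_alloc_onnode(size, node);
    if (buf == NULL) {
        fprintf(stderr, "allocation on node %d failed\n", node);
        return 1;
    }

    memset(buf, 0, size);       /* touch the pages so they are faulted in */
    printf("Allocated and touched %zu bytes on NUMA node %d\n", size, node);

    numa_free(buf, size);
    return 0;
}
```

Existing applications can get similar placement without code changes by running under `numactl --membind` or `numactl --interleave` to bind or spread their allocations across DRAM and CXL-backed nodes.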

What is the main advantage of CXL?

The key advantage of CXL is the expansion of the memory for compute nodes, filling the gap for data-intensive applications that require high bandwidth, high capacity and low latency.

What is Micron’s perspective on CXL?

Modern compute architectures are prone to the “memory wall” problem. CXL provides the architecture needed to close the gap between compute and memory scaling. It creates a new vector for achieving economically viable memory solutions through memory expansion, positively impacting the DRAM bit growth rate.

Additionally, CXL’s flexible and scalable architecture provides higher utilization and operational efficiency of compute and memory resources to scale-up or scale-out resources based on workload demands.

To learn more about Micron’s perspective on the impact of CXL on DRAM bit growth rate, read our white paper.

What is the memory wall problem and how does CXL help?

Modern parallel computer architectures are prone to system bottlenecks that limit performance for application processing. Historically, this has been known as the “memory wall”, where the rate of improvement in microprocessor performance far exceeds the rate of improvement in DRAM memory speed.

CXL protocol support for memory-device coherency addresses the memory wall by enabling memory expansion beyond the server’s DIMM slots. CXL memory expansion is a two-pronged approach: it adds bandwidth to help overcome the memory wall and adds capacity for data-intensive workloads on CXL-enabled servers.

What is Micron’s perspective on the impact of CXL on DRAM bit growth rate?

CXL-attached memory provides tremendous opportunity for growth in new areas such as tiered memory storage and enables memory scaling independent of CPU cores. CXL will help sustain a higher rate of DRAM bit growth, but don’t expect CXL to cause an acceleration in DRAM bit growth. Overall, it’s a net positive for DRAM growth.

What will Micron’s commitment to CXL technology enable for customers and suppliers?

Micron’s commitment to CXL technology enables customers and suppliers to drive the ecosystem for memory innovation solutions. To learn more about how Micron is enabling next-generation data center innovation, visit our data center solutions page.

How will CXL architecture change the data center?

CXL is a cost-effective, flexible and scalable architectural solution that will shape the data center of the future. It will change how the traditional rack-and-stack architecture of servers and fabric switches is deployed in the data center.

Purpose-built servers with dedicated, fixed resources composed of CPU, memory, network and storage components will give way to more flexible and scalable architectures. Servers in the rack – once interconnected to fixed resources for network, storage and compute – will be dynamically composed to meet the demands of modern and emerging workloads such as AI and deep learning. Eventually, the data center will migrate toward complete disaggregation of all server elements, including compute, memory, network and storage.
