Kove:SDM™ vs. CXL: How Software-Defined Memory Is Delivering Future-Ready Infrastructure While Hardware Still Waits
Compare Kove’s Software-Defined Memory (SDM) and CXL: Learn which delivers real-time memory performance, scale, and savings today — and which is still years away.
TABLE OF CONTENTS
Introduction to the Growing Importance of Memory Management Technologies
A Brief Introduction to Software-Defined Memory (SDM) and Compute Express Link (CXL)
What is Software-Defined Memory (SDM) and How Does it Work?
What Is Compute Express Link (CXL) and How Does it Work?
What Are the Differences Between Software-Defined Memory (SDM) and Compute Express Link (CXL)?
Introduction to the Growing Importance of Memory Management Technologies
The larger datasets used in generative AI, proprietary enterprise applications, and other intensive jobs are requiring more memory than ever.
As a result, technologists are often forced to purchase additional memory off-schedule just to continue meeting their constantly growing capacity needs. Regularly buying more (and ever larger) servers is expensive and takes financial resources away from other pressing needs.
Meanwhile, many servers are being underutilized because memory stranding is so common — even inevitable.
Data center design has successfully addressed resource sharing and stranding issues by virtualizing and pooling resources. With recent advances in technology, memory now joins the pantheon of other crucial virtualization strategies, including storage, computation, and networking. As with all virtualization technologies, it is critical to consider strengths, weaknesses, and benefits of software versus hardware virtualization approaches.
Two approaches to overcoming these memory limitations and challenges, and the need for more memory in general, have come to the fore: Software-Defined Memory (SDM) and Compute Express Link (CXL). Both have been in development for many years, but only SDM is commercially proven, easily implemented, and free of any substantial investment in new hardware. CXL, by contrast, currently has some components becoming available, but the full realization of its goals is still years away.
A brief introduction to Software-Defined Memory (SDM) and Compute Express Link (CXL)
What is the difference between SDM and CXL? SDM is a software approach to memory pooling, whereas CXL is a hardware approach. Both address the call for increasing memory capacity, and in fact both can work together to provide memory gains compared against traditional servers. But as with other types of pooled resources, software approaches radically increase flexibility. For example, CXL requires specialized memory modules, new hardware, and new chips, along with at least Gen 4 PCI Express. In contrast, as software, Kove:SDM™ runs seamlessly on any type of system, memory, or PCIe generation supported by Linux.
What Is Software-Defined Memory (SDM) and How Does it Work?
According to Manoj Wadekar in a paper published by SNIA (i.e., the Storage Networking Industry Association), “Software-defined memory (SDM) is an emerging architecture paradigm that provides software abstraction between applications and underlying memory resources with dynamic memory provisioning to achieve the desired SLA.”
In other words, SDM is the management of virtualized server memory. The other software-defined resources — computing, networking, storage — have been commercially available for quite some time.
But achieving a workable virtualized, software-defined memory solution has long been elusive, even though major players have invested billions into trying to develop this.
The first (and only) commercially available software-defined memory solution is Kove:SDM™. It empowers individual servers to draw from a common memory pool anywhere in the data center, including amounts far larger than could be contained within a physical server. Just as SANs enable processing databases larger than can fit on a single server’s disk, SDM enables processing larger workloads in memory than can fit on a single server. Indeed, SDM delivers performance equivalent to, and sometimes faster than, local memory, without ever needing swap, hypervisors, or additional NUMA nodes.
As a result, technology leaders can enjoy convenient 3–5x processing improvements — with more than 200x sometimes seen — without changing code. Users will finally receive the memory size and performance they need to analyze any size data set, or perform any computational need, when and where they need it on virtually any hardware.
Based on the features and documented benefits of Kove:SDM™, here’s what you should know about this breakthrough technology:
Key Features of Kove’s Software-Defined Memory (SDM)
Commercial Off-the-Shelf (COTS) Hardware
Unlike CXL, technologists can use Kove:SDM™ on any COTS hardware. Kove:SDM™ utilizes Ethernet for the Control Plane (i.e., command-and-control) and InfiniBand or RoCE for the Data Plane (i.e., memory data transfer). What’s more, Photonics, Slingshot, and other RDMA “memory-interconnects” will be supported as they become generally available. Finally, the command-and-control runs on Red Hat, VMware, and other popular virtualization platforms.
Memory Pooling
With SDM, memory is decoupled or “disaggregated” from standard servers and pooled into a global resource that is shareable and reusable across the data center. Memory stranding and starving are thus eliminated, so users can markedly increase utilization.
Dynamic Allocation
Any size computation can run completely in memory through the unlimited dynamic memory sizing that SDM enables. The server simply requests and receives memory from the pool, structured by customer-defined policies that govern access.
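The request-and-release cycle described above, governed by customer-defined policies, can be sketched in a few lines. This is a hypothetical illustration of the concept only, not Kove’s actual API; the class name, policy format, and sizes are invented for clarity.

```python
# Hypothetical sketch of policy-governed pooled allocation.
# The class, policy format, and sizes are illustrative, not Kove's actual API.

class MemoryPool:
    """A data-center-wide pool that servers draw allocations from."""

    def __init__(self, capacity_gib, policies):
        self.capacity_gib = capacity_gib
        self.allocated_gib = 0
        self.policies = policies  # customer-defined per-server caps, in GiB

    def request(self, server, size_gib):
        # Enforce the customer-defined policy governing this server's access.
        cap = self.policies.get(server, 0)
        if size_gib > cap:
            raise PermissionError(f"{server} policy cap is {cap} GiB")
        if self.allocated_gib + size_gib > self.capacity_gib:
            raise MemoryError("pool exhausted")
        self.allocated_gib += size_gib
        return {"server": server, "size_gib": size_gib}

    def release(self, allocation):
        # Freed memory returns to the pool for reuse elsewhere.
        self.allocated_gib -= allocation["size_gib"]


pool = MemoryPool(capacity_gib=65536, policies={"analytics-01": 32768})
alloc = pool.request("analytics-01", 16384)  # grows the server beyond its DIMMs
print(pool.allocated_gib)                    # 16384
pool.release(alloc)
print(pool.allocated_gib)                    # 0
```

The key idea the sketch captures is that capacity limits become policy, not physics: a server’s memory ceiling is whatever the administrator grants it, up to the size of the pool.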
Flexibility
SDM gives technologists the flexibility to rapidly create any size server or set of servers required to meet their immediate needs. If requirements change, in milliseconds SDM policies enable dynamic allocation of memory to any size, small or large, to meet the updated demands — even when memory resources span an entire data center. SDM also offers the flexibility of supporting clustered, containerized, physical, or virtual computing while reducing overall hardware and energy costs.
The Key Benefits of Software-Defined Memory
- Design Efficiencies
Kove:SDM™ uses neither hypervisors nor additional NUMA nodes. Its design delivers performance equivalent to, and sometimes faster than, local server processing.
- Improved Performance
A global financial customer of Kove:SDM™ running Red Hat® OpenShift® demonstrated that software-defined memory boosted a workload’s performance by up to 60x compared to the leading virtual machine approach.
- Enhanced Resource Utilization
SDM eliminates memory stranding, enabling you to use fewer servers to achieve the same level of work. Turn 30% memory utilization into 90%, and get up to 200% return on investment by reducing your total cost of ownership and extending your infrastructure’s lifespan.
- Greater Flexibility
SDM scales memory on demand, statistically matching CPU needs for local memory, including amounts exceeding physical server capacity.
- Improved Agility
SDM enables you to use more of the hardware you already own in ways you never imagined. Challenges an organization might once have considered unachievable can now be addressed quickly.
- Scale-up, Scale-out, or Both
Sharding technologies have made scaling possible simply by buying more licenses and servers. SDM enables scaling shards “up,” so you can scale your application without adding (or needing) additional shards and concomitant licensing. If you want to shard, you can also do that seamlessly with SDM. Technologists can scale up, scale out, or both, depending on the needs of the job.
- Greater Availability
With Kove:SDM™, your application is resilient to memory failures. Instead of bringing down a server to replace a failed DIMM, SDM provides a new virtualized allocation in a few hundred milliseconds, avoiding service disruption.
- Simplified Management
Kove:SDM™ command-and-control runs on Red Hat, VMware, and other common virtualization platforms. Once policies are configured, the system provisions memory automatically, while still permitting manual local control by data scientists. Policy-governed management enables organizational flexibility, control, and enforced capacity and utilization rates.
- Enhanced Security
After use, allocated memory is zeroed out before being returned to the pool for reuse, which naturally supports multitenancy. SDM also supports native hardware optimizations and encryption.
- Environmentally Friendly
Through the greater efficiency of pooled memory and the ability to shrink the footprint, SDM can reduce power, heat, and cooling needs by as much as 54% and CO2 by as much as 52%.
- Cost Savings
Less memory and CPU investment is needed, along with less power and cooling; footprint can shrink by up to 33%. Enhanced resource utilization can also eliminate the practice of purchasing memory off-cycle to keep up with ever-increasing memory demands.
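The utilization claim above reduces to simple arithmetic. With illustrative numbers (a 90 TiB aggregate working set and 1 TiB of memory per server; these figures are assumptions, not from any benchmark), raising average utilization from 30% to 90% cuts the required server count by two thirds:

```python
# Back-of-envelope arithmetic for the utilization benefit described above.
# Workload size and per-server memory are illustrative assumptions.
import math

def servers_needed(workload_tib, per_server_tib, utilization):
    # Each server effectively contributes only its utilized fraction.
    effective = per_server_tib * utilization
    return math.ceil(workload_tib / effective)

workload = 90     # TiB of memory the workloads actually need
per_server = 1    # TiB installed per server

print(servers_needed(workload, per_server, 0.30))  # 300 servers at 30% utilization
print(servers_needed(workload, per_server, 0.90))  # 100 servers at 90% utilization
```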
Select Software-Defined Memory Applications
Deployment Example: SWIFT Modernizes Financial Services Infrastructure With Software-Defined Memory
If an industry or business requires more than one server or experiences expensive memory stranding, then Kove:SDM™ will benefit the solution. Kove:SDM™ runs on any standard hardware today or tomorrow, and seamlessly on CXL hardware if/when it becomes available.
To understand the full power of using Kove:SDM™ right now, let’s look at how SDM is being employed by SWIFT, the backbone of the global financial industry. SWIFT is an organization that serves over 11,000 banking and securities organizations, as well as corporate customers in more than 200 countries and territories.
To modernize its infrastructure and scale memory-intensive workloads — such as risk modeling and real-time fraud detection — SWIFT implemented Kove:SDM™ alongside Red Hat® OpenShift®. This enabled the organization to:
- Dynamically scale memory across development, staging, and production without rewriting applications.
- Support up to 64 PiB (pebibytes) of pooled memory per process, eliminating traditional server memory constraints.
- Run AI/ML workloads with real-time responsiveness, ensuring operational agility in security-critical environments.
- Maximize existing infrastructure and avoid costly hardware refresh cycles.
This deployment highlights how software-defined memory isn’t theoretical. It’s already enabling global-scale financial systems to operate with greater resilience, speed, and efficiency — for mission-critical global outcomes.
If Kove:SDM™ can succeed where privacy, security, performance, global criticality, and truly immense processing capabilities are success requirements, keeping over 42 million transactions worth over $5 trillion USD per day frictionless, instantaneous, and secure, then it can certainly work for the needs of your organization as well.
Read more: The Ultimate Guide to Software Memory
What Is Compute Express Link (CXL) and How Does it Work?
Formed in 2019, the CXL Consortium is an open-standards initiative organized by technology companies including AMD, Cisco, Dell/EMC, Google, IBM, Intel, Samsung, and approximately 180 other members. CXL establishes coherent interconnect technology based on PCIe. The CXL specification is on its third generation: CXL 3.0 uses PCIe 6.0 signaling and is backward compatible with CXL 1.0, 1.1, and 2.0. CXL is expected to evolve in parallel with PCIe.
The design targets high-bandwidth, low-latency connectivity between the host processor and devices such as accelerators, memory buffers, and smart I/O devices. I/O devices can include Type 1, 2, and 3 devices, plus co-processors and memory. It offers memory coherent connectivity between the processor and accelerators or memory.
Delivered to the marketplace, CXL is designed to provide an industry standard for improved connectivity, better performance and efficiency, greater capacity and bandwidth, and cache-coherent memory sharing between CPUs, GPUs, TPUs, and other processors. Effectively realized, CXL promises to support AI/ML and other intensive applications, as well as enable IT to more easily add memory to a CPU host processor. Like what Software-Defined Memory (SDM) delivers today, one day in the future CXL aims to eliminate stranded memory.
Should the CXL vision deliver its promises one day, professionals should be able to “focus on target workloads as opposed to the redundant memory management hardware in their accelerators.” With a CXL approach, professionals, programmers, data scientists, developers, IT houses, cloud, data centers, and even the edge one day would be defined by workload and software needs, not memory hardware constraints.
Based on the goals and specifications set by the CXL Consortium, here’s what you should know about this promised technology should it become fully available:
Key Features Compute Express Link (CXL) Hopes to Achieve:
Hardware-Based Memory Coherency
CXL will support limited (i.e., within-server) memory pooling by ensuring coherence between the memory on the CPU and the memory on other attached devices. The operating system (OS) typically treats the added CXL memory as a second tier: native DRAM is “near” memory, and CXL memory is “far” memory. This hardware approach requires specialized logic to be reimplemented for every operating system and inside every hypervisor, and it faces performance penalties from navigating the “near” and “far” tiers, as well as hardware overhauls and potential software programming challenges.
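The near/far tiering described above can be modeled as a simple spill-over allocator. This is a toy illustration of the tiering concept, with invented names and sizes; real OS tiering (for example, exposing CXL memory as a CPU-less NUMA node) is considerably more involved.

```python
# Toy model of two-tier ("near" DRAM / "far" CXL) placement, illustrating the
# tiering scheme the paragraph describes. Tier sizes are assumptions.

class TieredMemory:
    def __init__(self, near_gib, far_gib):
        self.free = {"near": near_gib, "far": far_gib}

    def allocate(self, size_gib):
        # Prefer fast local DRAM; spill to the far CXL tier only when needed.
        for tier in ("near", "far"):
            if self.free[tier] >= size_gib:
                self.free[tier] -= size_gib
                return tier
        raise MemoryError("both tiers exhausted")

mem = TieredMemory(near_gib=512, far_gib=2048)
print(mem.allocate(256))   # 'near' -- fits in local DRAM
print(mem.allocate(400))   # 'far'  -- only 256 GiB of near memory remains
```

The model also makes the performance penalty concrete: any allocation that lands in the “far” tier pays the slower path on every access, which is why tier-aware placement logic must exist in every OS and hypervisor.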
High-Speed Interconnect
The goal is to achieve high-bandwidth, low-latency connectivity. The CXL 3.0 protocols call for a maximum link rate of 64 GT/s and a transfer rate of up to 121 GB/s for an x16 device. CXL’s approach improves line-rate performance, but remains subject to the latency physics of light traveling the length of the cable.
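The relationship between the 64 GT/s link rate and the ~121 GB/s figure is straightforward arithmetic. The raw line rate is exact; the roughly 94.5% flit efficiency used below is an assumption (the exact payload fraction depends on CRC/FEC overhead in the 256-byte flit format):

```python
# How the ~121 GB/s figure for a 64 GT/s x16 CXL 3.0 link arises.

gt_per_s = 64e9        # 64 GT/s per lane (1 bit per transfer with PAM4)
lanes = 16

raw_gb_s = gt_per_s * lanes / 8 / 1e9      # bits -> bytes -> GB/s
flit_efficiency = 0.945                    # assumed payload fraction per flit

print(raw_gb_s)                            # 128.0 GB/s raw, per direction
print(round(raw_gb_s * flit_efficiency))   # 121 GB/s usable
```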
In-Rack Memory Pooling
CXL enables shared memory for solving large problems by building groups of in-rack hardware. In-rack memory pooling is not available prior to CXL 3.0. This, too, is subject to physics: the latency of signals traveling across cabling distance.
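The physics constraint mentioned here is easy to quantify. Assuming signal propagation at roughly two thirds the speed of light (a typical approximation for copper and fiber), round-trip delay grows linearly with cable length, before any protocol overhead is added:

```python
# Propagation delay alone, as a function of cable length.
# The 2/3 c signal speed is an approximation for typical cabling.

C = 299_792_458            # speed of light in vacuum, m/s
signal_speed = C * 2 / 3   # ~2e8 m/s in cable

def round_trip_ns(meters):
    return 2 * meters / signal_speed * 1e9

print(round(round_trip_ns(3)))     # about 30 ns within a rack
print(round(round_trip_ns(150)))   # about 1.5 microseconds across a data center
```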
Unified Interface
With more than 180 member technology companies, the consortium hopes to bring open standards to the industry and improve hardware interoperability across vendors.
Security
CXL enables memory encryption to protect data in transit, and trusted execution environments to host confidential workloads. However, data can remain in one or more caches due to cache coherency.
The Potential Benefits of Compute Express Link (CXL)
Performance
CXL promises faster data transfer speeds than current hardware solutions. The CXL 3.0 specification supports a 64 GT/s transfer rate while maintaining CXL 1.0, 1.1, and 2.0 latency.
Scalability
The CXL Consortium’s goal is to enable server hardware that ensures coherence while enhancing memory pooling, multi-level switching, and software capabilities. Technology professionals will still need to maintain spare capacity with CXL to meet peak demands, but the amount of spare capacity can be smaller than with current hardware solutions.
Compatibility
To use CXL, technologists will need to purchase all-new hardware. Once the promised CXL products are on the market and installed, there should theoretically be compatibility and unity between hardware suppliers. Until then, mixing CXL and non-CXL hardware across technology generations complicates data center management, expanding an organization’s labor and maintenance costs.
Efficiency
CXL makes as-yet-untested promises of being more efficient than what can be achieved with today’s hardware infrastructure without the use of SDM. For instance, it will enable “direct peer access to HDM memory without going through host” and other enhancements.
Anticipated Applications for CXL
AI/ML
CXL makes unsupported promises to eventually enable quicker, more efficient processing than existing hardware processing by interconnecting “nodes” that can engage with each other and CPUs to conduct complex computations by utilizing specialized resources.
Data Centers
Organizations will need to replace hardware and implement new “rack-level architectures” to integrate CXL. However, should CXL come to fruition and also reach its goals, data centers would potentially be able to achieve long-term cost reductions and improved application performance.
GPU-Accelerated Computing
While CXL enables GPU-accelerated computing, Nvidia is not jumping on the CXL bandwagon. As reported by Dylan Patel and Jeremie Eliahou Ontiveros in an article entitled “CXL Is Dead in The AI Era” (and re-reported by Chris Mellor in Blocks and Files), Nvidia is limiting the number of PCIe lanes to optimize bandwidth for faster processing. They even predict that the chip area devoted to the PCIe interconnect may shrink in the future.
High-Performance Computing (HPC)
CXL provides a vision for multi-tier memory that could be friendly to HPC, though it will involve different hardware, new programming models, and new architecture designs. CXL targets rack-level performance, with performance differing across memory tiers and degrading further across data-center distances. SDM has no such restrictions: it delivers local-memory performance whether the remote memory resides across a rack or across the data center.
Current Status and Timeline for Market Availability
- In 2022, Tobias Mann wrote in the technology publication The Next Platform that “While it is true the CXL will allow for full disaggregated systems where resources can be shared throughout the rack over a high-speed fabric, those days are still a few years off.”
- By all indications, what was true in 2022 is still true today in 2025, particularly for a complete commercialization and implementation of the recently released CXL 3.0 Standards. In fact, there seems to be a repeating CXL promise of imminent availability. In early 2024, for instance, it was predicted that CXL hardware would start becoming widely adopted toward the end of that year. Currently, it is being predicted that CXL hardware will be available at some point in 2025.
- In other words, CXL may help solve some problems in the future. But SDM, commercially available today, already delivers on CXL’s promises, along with features, performance, and benefits CXL never anticipated. SDM future-proofs the adoption of all publicly available hardware approaches to memory virtualization, including but not limited to CXL. Customers adopting CXL accept forklift upgrades and reduced flexibility in software options. Customers adopting SDM achieve functionality and features beyond CXL’s roadmap, with the protection of working with standard hardware, including CXL as it becomes available. Working on all x86 hardware, SDM is easier and cheaper to get started with than anything CXL even proposes for the future.
What Are the Differences Between Software-Defined Memory (SDM) and Compute Express Link (CXL)?
Which technology should you choose for your business? Here’s a brief chart to help you compare them:
| | Software-Defined Memory (SDM) | Compute Express Link (CXL) |
|---|---|---|
| Approach | Software-based solution | Hardware-based solution |
| Architecture | Flexible, dynamic memory management with like-local (and sometimes faster-than-local) performance, even when serving memory across a data center, on any x86 hardware, standard networking, and all existing programming models | In-rack, high-speed, low-latency interconnect delivering performance slower than local memory, with architecture changes necessary to use the technology, including new servers, new protocols, and possibly new programming models |
| Deployment Scale | In-rack or across a data center | In-server and in-rack, but not across the data center |
| Integration and Compatibility | Easy integration with existing IT infrastructure; no hardware changes needed. Kove:SDM™ requires commodity InfiniBand or RoCE Ethernet | Requires specific new hardware and scale-wide architectural changes |
| Security | Zeroes out memory once a job completes; supports native hardware encryption methods | Supports memory encryption to protect data in transit |
| Transfer Performance | Same as or faster than local memory | 64 GT/s, subject to degrading latency |
| Fully Cache Coherent | Yes | Yes |
| Low-Latency Memory per CPU | Approximates (and sometimes exceeds) local DRAM performance, even when sharing memory across data-center distances | Approximates local DRAM performance when inside the same server box |
| Speed of Light, Distance of Cable | Not applicable; performance is equivalent to, and in some cases faster than, local server performance, even with remote memory 150 meters away | Performance degrades in-server, and more so across-rack |
| Global Memory Pool Access Across Nodes | Yes | Yes |
| Scalability | Disaggregated memory available across racks or a whole data center; especially effective for cloud and HPC environments | Node-bound; devices must be physically connected to a memory source |
| Bandwidth | Scales linearly, achieving terabytes/second of bandwidth, always with deterministic performance | CXL 2.0: 64 GB/s full duplex, with CXL 3.0 promising more |
| Latency | Same as or faster than local memory, even when serving memory across data-center distances | Subject to speed-of-light, distance-of-cable performance loss and transfer jitter; ~100 ns to 1 µs in-box |
| Maturity | Established. Kove:SDM™ has been around for more than a decade and has been commercially validated and deployed in some of the world’s most rigorous contexts | Emerging. CXL is partially available to very early adopters ready to upgrade some or all of their existing hardware |
| Innovation | Generic software that improves all standard, existing, and arriving x86 hardware capabilities, including arriving hardware innovations such as CXL | In-hardware architecture aiming to redefine how memory and compute resources interact at the hardware level for next-generation computing environments |
TL;DR: SDM vs. CXL Quick Comparison:
- SDM is available now; works on all existing and arriving x86 hardware; is software-based and flexible; no new servers needed.
- CXL is not yet fully available; is hardware-based; requires new infrastructure and chips.
- SDM supports hardware memory encryption and also offers memory zeroing for safe multi-tenancy. CXL has hardware-encrypted memory but increased latency risks.
- SDM delivers local-memory-speed access across data centers; CXL is limited to rack-level for now.
What’s the Market Availability and Adoption of SDM and CXL?
Kove:SDM™ already powers critical infrastructure, with an easily deployed design that forward-thinking organizations can use to capture and create market share.
By adopting Kove:SDM™ today, technology professionals can immediately enjoy its benefits: 3–5x improvement in computing density, maximized hardware utilization, up to 50% reduced energy need, and 125x faster processing than swap. Customers achieve these benefits on existing, off-the-shelf hardware without headache or heartache.
Meanwhile, CXL is an emerging technology centered on goals and promises, with very limited availability of a relatively small array of CXL-enabled products. Because it is just emerging, it is unclear whether technology leaders would need to modify code to implement CXL and, if so, how much modification would be needed. Will CXL deliver on its promise? In all cases, CXL’s design offers less flexibility and lower performance than Kove:SDM™.
Conclusion
As mentioned earlier, technologists can enjoy SDM benefits right now, answering the call for greater server memory on existing hardware, or wait an undetermined amount of time to see whether CXL delivers on its promises.
Meanwhile, as a software-based solution, Kove:SDM™ will work on the eventual CXL hardware, magnifying the benefits of both. In addition to providing features, benefits, and performance beyond CXL, SDM will support rather than collide with CXL promises as they arrive into the marketplace.
Finally, software evolution has historically yielded bigger gains than hardware developments in compute, storage, and networking. These same benefits now arrive with memory. While CXL will potentially address some market needs, bigger transformational gains will come from employing Kove:SDM™. Incorporating hardware innovations naturally, SDM fills out the pantheon of data center virtualization techniques: storage, compute, networking, and now memory.
So, why wait?
Install Kove:SDM™ today. Customers can use this first-of-its-kind technology both with and without CXL, making any investment in Kove memory technology a smart investment.
Find out more about Kove:SDM™ right now.
Frequently Asked Questions (FAQs)
Q: What is software-defined memory (SDM)?
A: Software-defined memory (SDM) is a virtualization technology that separates memory from the physical server, allowing memory to be pooled, shared, and dynamically allocated across systems. With Kove:SDM™, organizations can scale memory capacity far beyond local hardware limits — up to 6 PiB per process — without changing code or adding new servers.
Q: How is SDM different from CXL?
A: Kove:SDM™ is software-based and available now, enabling real-time memory access across servers. CXL (Compute Express Link) is a hardware interconnect standard that requires new CPUs, motherboards, and memory modules to function. While CXL offers lower-latency memory sharing at the rack level than previous hardware approaches, SDM virtualizes memory across the entire data center while still delivering local (or faster-than-local) memory performance.
Q: Is SDM commercially available?
A: Yes. Kove:SDM™ is a commercially available SDM solution that works with today’s and tomorrow’s x86 infrastructure and operating systems. Enterprises can implement SDM without waiting for next-generation hardware or rewriting applications.
Q: What are the benefits of SDM for AI, ML, and data-intensive workloads?
A: Kove:SDM™ removes memory bottlenecks that slow down AI/ML inference, training, and large in-memory datasets. By pooling memory across servers, SDM allows models to access terabytes of memory on-demand — without hitting performance walls. This leads to faster time to insight, higher model throughput, better resource utilization, and huge value to the bottom line.
Q: Can SDM and CXL work together?
A: Yes. Kove:SDM™ and CXL are complementary, not competitive. In the future, SDM can integrate CXL-connected memory pools into its virtualized fabric — offering even more flexibility. CXL adoption is still ramping up, and will typically perform less well than standard hardware running Kove:SDM™. SDM delivers immediate scale and ROI using the infrastructure organizations already own and use.
Q: Is CXL available now for enterprise deployment?
A: CXL is still in early-stage deployment. It requires CXL-compatible CPUs (like select Intel and AMD chips), new DIMMs, and operating system support. Full-stack enterprise adoption is expected to take several years. In contrast, SDM solutions like Kove:SDM™ are deployable today on standard Linux environments, using common hardware, in a few minutes.
Summary: What to Know About SDM vs. CXL
Kove:SDM™ is a commercially available software-defined memory platform that virtualizes and pools memory resources across the data center. Unlike CXL, which is a hardware-based interconnect still in development, SDM works on existing x86 infrastructure and supports memory access at local-latency speeds — even over long distances. SDM supports demanding AI/ML, edge, and financial workloads today, while CXL’s promise remains largely future-facing. SDM can complement CXL adoption when it becomes mainstream. But SDM delivers tangible ROI now while future proofing performance scaling, benefiting from all hardware innovation, including, but not limited to, CXL.