With a tacit acceptance of the decline in Moore’s Law, businesses and their IT organizations need to find new ways to process and manage the huge amounts of data being created by an increasingly distributed infrastructure. Disaggregated computing is one solution proposed by start-ups like Pliops, which are looking to evolve the traditional model of compute into new and diversified architectures.
How Did We Get Here?
Centralized data processing has seen incredible growth over the last 30 years. The Intel architecture now dominates the data center. However, the physical challenges experienced in scaling CPU performance have resulted in vendors developing solutions that scale out rather than up. Modern processors have increased performance through the addition of CPU cores — essentially multiple processors within a single package.
The introduction of flash media into the data center has resulted in enormous improvements in I/O throughput. Storage was once the server bottleneck; however, modern NVMe SSDs can deliver hundreds of thousands of IOPS at latencies measured in microseconds. SSDs are orders of magnitude faster than legacy hard drives and, on top of that, improving at a faster rate than general CPUs.
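To put "orders of magnitude" in perspective, a rough back-of-the-envelope calculation works: at queue depth 1, IOPS is bounded by the inverse of per-operation latency. The figures below are typical published numbers for the two media types, not measurements from the interview:

```python
def qd1_iops(latency_seconds):
    """Upper bound on serial (queue depth 1) IOPS for a device
    with the given per-operation latency."""
    return 1.0 / latency_seconds

hdd_latency = 10e-3   # ~10 ms seek + rotation, typical hard drive
ssd_latency = 100e-6  # ~100 us read, typical NVMe SSD

print(round(qd1_iops(hdd_latency)))  # ~100 serial IOPS
print(round(qd1_iops(ssd_latency)))  # ~10,000 serial IOPS
```

The headline figures of hundreds of thousands of IOPS come from the internal parallelism of NAND devices at higher queue depths; the point here is simply the two-orders-of-magnitude gap in per-operation latency.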
In parallel with the improvements in media, computing (and storage in particular) has moved away from custom hardware. Instead, features that include encryption, compression, and other computationally intensive activities are routinely delivered through the processor cores available to applications. At first glance, this may seem like a smart move; however, as data growth continues to explode, there are not enough cores to keep up, driving even more expensive scale-out. This makes the use of processor cores for storage activities an expensive luxury.
As we look further up the stack, we see that the I/O demands of modern applications don’t always align with the capabilities of storage media. NAND flash has a limited endurance or write lifetime. With certain types of workload, the I/O profile has the effect of creating write amplification, or multiple physical writes to media to store one logical piece of data. While SSD manufacturers can mitigate some of these issues, the evolution towards QLC and potentially PLC NAND media will result in further pressure to manage endurance and the effects of write amplification.
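The endurance impact of write amplification is easy to quantify. The write amplification factor (WAF) is physical bytes written to NAND divided by logical bytes the host asked to write, and it directly divides a drive's usable lifetime. The figures below are illustrative assumptions, not vendor specs:

```python
def write_amplification_factor(physical_bytes_written, logical_bytes_written):
    """WAF = bytes actually written to NAND / bytes the host asked to write."""
    return physical_bytes_written / logical_bytes_written

def drive_lifetime_days(capacity_gb, pe_cycles, host_writes_gb_per_day, waf):
    """Rough endurance estimate: total NAND write budget divided by the
    physical write rate (host writes inflated by the WAF)."""
    nand_write_budget_gb = capacity_gb * pe_cycles
    return nand_write_budget_gb / (host_writes_gb_per_day * waf)

# Assumed example: a 1 TB QLC drive rated for 1,000 P/E cycles,
# absorbing 500 GB of host writes per day.
print(drive_lifetime_days(1000, 1000, 500, waf=4.0))  # 500.0 days
print(drive_lifetime_days(1000, 1000, 500, waf=1.5))  # ~1333 days
```

Halving the WAF doubles the life of the media, which is why it matters even more as lower-endurance QLC (and eventually PLC) NAND arrives.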
Offload and Accelerate
Disaggregation is one solution put forward to deal with the issues we’ve discussed. In this model, complex storage tasks are offloaded to a dedicated storage processor, rather than being handled by the general CPU at the heart of the server.
This additional hardware is responsible for mitigating the challenges of storage media and delivers the additional benefits of saving precious general CPU cores for application-specific tasks.
Pliops Storage Processor
In a recent video interview, I discussed the Pliops technology with President and CBO, Steve Fingerhut.
Pliops has developed the storage processor, a hardware-based storage accelerator that offloads and accelerates data-intensive tasks. Steve explained how the solution works in two specific scenarios:
Flash Storage Optimization
The Pliops Storage Processor lives in the data path between the application and SSD storage, optimizing both the way data is processed and the way SSD storage is managed. This improves performance, resiliency, and the endurance of the hardware. At the host layer, the Pliops Storage Processor is accessed via a standard block interface and appears as an NVMe block device, so it can be implemented without any host-level modifications.
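"No host-level modifications" follows from presenting as a standard NVMe block device: the host's existing read/write path works unchanged. As a minimal sketch, the positioned-read routine below is identical whether the path points at a plain SSD or a device exposed by an accelerator (the demo uses a regular file so it runs anywhere; on a real host the path would be something like /dev/nvme0n1):

```python
import os

def read_block(path, offset, length):
    """Issue a positioned read against a block device (or any file).
    The call path does not change when an accelerator sits behind
    a standard NVMe block interface."""
    fd = os.open(path, os.O_RDONLY)
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        return os.read(fd, length)
    finally:
        os.close(fd)

# Demo against a regular file rather than a real device.
demo_path = "/tmp/_block_demo"
with open(demo_path, "wb") as f:
    f.write(b"hello world")
print(read_block(demo_path, 6, 5))  # b'world'
```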
Storage Engine Offload
The second scenario has been developed specifically to mitigate the challenges introduced by database storage engines. Modern storage engines such as InnoDB, RocksDB, and WiredTiger optimize I/O performance at the expense of write amplification. The Pliops Storage Processor offloads the key/value I/O functionality used by storage engines via popular software APIs, resulting in supercharged performance and extended media endurance.
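The article doesn't show Pliops' actual API, but the interface shape being offloaded is the familiar put/get/delete contract of a RocksDB-style key/value store. The in-memory sketch below is purely illustrative of that contract; in the offload scenario, the indexing, compression, and NAND-friendly write layout behind these calls would run on the card rather than on host CPU cores:

```python
class KVStore:
    """Minimal in-memory sketch of a RocksDB-style key/value interface.
    Hypothetical shape for illustration only; not Pliops' actual API."""

    def __init__(self):
        self._data = {}

    def put(self, key: bytes, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: bytes):
        """Return the value for key, or None if absent."""
        return self._data.get(key)

    def delete(self, key: bytes) -> None:
        self._data.pop(key, None)

store = KVStore()
store.put(b"user:1", b"alice")
print(store.get(b"user:1"))  # b'alice'
```

Because storage engines already speak this narrow interface, swapping the software implementation for a hardware-backed one is a natural offload boundary.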
Steve goes on to highlight how the Pliops Storage Processor delivers savings in several ways:
- Capacity: Pliops’ in-line compression and space-efficient drive-failure protection improve on-disk efficiency by up to 6X.
- Increased Endurance: SSDs last longer because data is written in a NAND-friendly way.
- Hardware Savings: Increased performance results in greater CPU efficiency and the ability to deploy smaller hardware footprints.
- Compute Savings: CPU cores in existing hardware are freed to run application code and do more productive work.
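The capacity claim above translates directly into drive counts. Taking the stated up-to-6X efficiency multiplier at face value (the dataset and drive sizes below are assumed for illustration):

```python
import math

def drives_needed(dataset_tb, raw_drive_tb, efficiency_multiplier):
    """Number of drives required to hold a dataset, given an on-disk
    efficiency multiplier from in-line compression and space-efficient
    data protection (1.0 = no acceleration)."""
    effective_drive_tb = raw_drive_tb * efficiency_multiplier
    return math.ceil(dataset_tb / effective_drive_tb)

# Assumed example: a 60 TB dataset on 8 TB drives.
print(drives_needed(60, 8, 1.0))  # 8 drives without acceleration
print(drives_needed(60, 8, 6.0))  # 2 drives at the claimed 6X efficiency
```

Fewer drives per node compounds with the endurance and CPU savings: a smaller footprint also means fewer servers, less power, and fewer cores consumed by storage housekeeping.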
The race to deliver faster storage has started. In the future, we can expect all data-intensive applications to be accelerated in some form. Pliops has developed an elegant solution that inserts into existing applications and server hardware, making the transition to disaggregated architectures a simple and painless process. Learn more about Pliops by watching the full video interview above, visiting their website, or watching their most recent Storage Field Day appearance.