
Going Beyond the Four Corners of Data Storage

When you are architecting an SSD storage system, performance is an important consideration, especially when media cost makes up a substantial portion of the total system cost. None of us wants to choose poorly.

The traditional and easy way to start evaluating SSD performance is to look at the ‘four-corners’ of sequential and random reads and writes. Testing a drive with 100% of each kind of workload gives you a rough idea of the field of performance you can get from the drive.
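If you want to see what that looks like in practice, a four-corners run is usually driven with a tool like fio. The sketch below is a minimal illustration, not a recommended methodology: the device path, block sizes, queue depths, and runtimes are all assumptions you would adjust for your own hardware, and writing to a raw device will destroy any data on it.

    import json
    import subprocess

    # Hypothetical target; four-corners testing is destructive to data on the device.
    DEVICE = "/dev/nvme0n1"

    # The classic four corners: sequential and random, reads and writes.
    CORNERS = {
        "seq-read":   {"rw": "read",      "bs": "128k", "iodepth": 32},
        "seq-write":  {"rw": "write",     "bs": "128k", "iodepth": 32},
        "rand-read":  {"rw": "randread",  "bs": "4k",   "iodepth": 128},
        "rand-write": {"rw": "randwrite", "bs": "4k",   "iodepth": 128},
    }

    for name, p in CORNERS.items():
        cmd = [
            "fio", f"--name={name}", f"--filename={DEVICE}",
            f"--rw={p['rw']}", f"--bs={p['bs']}", f"--iodepth={p['iodepth']}",
            "--ioengine=libaio", "--direct=1",
            "--runtime=60", "--time_based", "--output-format=json",
        ]
        out = json.loads(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)
        job = out["jobs"][0]
        # Print both IOPS and bandwidth; sequential corners are usually judged on
        # bandwidth, random corners on IOPS.
        print(name,
              f"read_iops={job['read']['iops']:.0f}",
              f"write_iops={job['write']['iops']:.0f}",
              f"read_bw_KiB={job['read']['bw']}",
              f"write_bw_KiB={job['write']['bw']}")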

But this approach is obviously artificial; real-world workloads don’t behave this way, and no one seriously believes they do. Yet we have to start somewhere, and theoretical limits do provide us with that starting point.

Go Further

The need to go further starts almost immediately with choices about block size and queue depth. Do you test queue depths of 128 or 32? Is 64KB the right choice for block size, or is it 8MB? We could test a range of options to try to get a picture of what we can expect under different conditions, but what does this really tell us?
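As an illustration of how quickly that test matrix grows, the sketch below extends the same fio-driven approach to sweep a single workload (random reads) across a grid of block sizes and queue depths. The values are illustrative assumptions rather than guidance, and it assumes a recent fio that reports nanosecond latencies in its JSON output.

    import itertools
    import json
    import subprocess

    DEVICE = "/dev/nvme0n1"            # hypothetical target device
    BLOCK_SIZES = ["4k", "64k", "1m", "8m"]
    QUEUE_DEPTHS = [1, 32, 64, 128]

    for bs, qd in itertools.product(BLOCK_SIZES, QUEUE_DEPTHS):
        cmd = [
            "fio", "--name=sweep", f"--filename={DEVICE}",
            "--rw=randread", f"--bs={bs}", f"--iodepth={qd}",
            "--ioengine=libaio", "--direct=1",
            "--runtime=30", "--time_based", "--output-format=json",
        ]
        job = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                        check=True).stdout)["jobs"][0]["read"]
        # Latency matters as much as throughput once the queue gets deep.
        print(f"bs={bs:>4} qd={qd:>3} iops={job['iops']:>10.0f} "
              f"mean_lat_us={job['lat_ns']['mean'] / 1000:.1f}")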

The goal of testing should be to find drives that will suit the use case to which they’ll be put. Simple testing based on simple workloads at the four corners might give us a general feel for how a drive performs, but for important systems, shouldn’t we be expected to go further? Shouldn’t we at least try to determine the kind of workload profile our storage system will be expected to support, and then find SSDs to match?
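One practical way to start answering that question is to measure the workload you already have. The sketch below is a rough, Linux-only example that samples the kernel's /proc/diskstats counters on a system running the target application and estimates the read/write mix and average request sizes; the device name and sampling window are assumptions.

    import time

    DEVICE = "nvme0n1"   # hypothetical device to profile
    INTERVAL = 10        # seconds

    def sample(dev):
        # /proc/diskstats fields (after major, minor, name), per kernel docs:
        # 1 reads completed, 3 sectors read, 5 writes completed, 7 sectors written
        # (sectors are 512 bytes)
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == dev:
                    return int(fields[3]), int(fields[5]), int(fields[7]), int(fields[9])
        raise ValueError(f"device {dev} not found")

    r0, sr0, w0, sw0 = sample(DEVICE)
    time.sleep(INTERVAL)
    r1, sr1, w1, sw1 = sample(DEVICE)

    reads, writes = r1 - r0, w1 - w0
    total = max(reads + writes, 1)
    print(f"read/write mix: {100 * reads / total:.0f}% / {100 * writes / total:.0f}%")
    print(f"avg read size:  {512 * (sr1 - sr0) / max(reads, 1) / 1024:.1f} KiB")
    print(f"avg write size: {512 * (sw1 - sw0) / max(writes, 1) / 1024:.1f} KiB")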

Solidigm believes we should, which is why it has built deep technical relationships with customers and storage system solution providers to better understand what those systems actually demand of the drives inside them. Solidigm uses that knowledge to tune the firmware on its drives to maximize performance for real-world workloads, not benchmarks, and that tuning helps its drives outperform competitors' under real-world conditions.

Be Smarter

In its work with customers, Solidigm has discovered that most workloads operate at low queue depths, run in multi-tenant environments, and mix reads and writes, and that there is nearly always some degree of write pressure, even for the most read-intensive workloads.

For example, Content Delivery Networks (CDNs) are up to 95% read, while databases typically run around 70% read to 30% write. Even with very read-intensive workloads, there is still background write pressure from garbage collection, TRIM operations, and the like. Maintaining high and consistent read responsiveness under these conditions requires an understanding of the complexities of how real systems actually work.
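Testing for those conditions is not much harder than testing the corners. The sketch below is a hypothetical example that uses fio's mixed-workload and multi-job options to approximate a multi-tenant, roughly 70/30 read/write profile at low queue depth, and reports read latency percentiles rather than headline throughput; every parameter is an assumption to tune for your own environment, and the percentile key format follows fio's JSON output.

    import json
    import subprocess

    DEVICE = "/dev/nvme0n1"   # hypothetical target device

    cmd = [
        "fio", "--name=tenant", f"--filename={DEVICE}",
        "--rw=randrw", "--rwmixread=70",    # 70% reads / 30% writes
        "--bs=8k", "--iodepth=4",           # low per-tenant queue depth
        "--numjobs=8", "--group_reporting", # eight "tenants" sharing the drive
        "--ioengine=libaio", "--direct=1",
        "--runtime=120", "--time_based", "--output-format=json",
    ]
    job = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                    check=True).stdout)["jobs"][0]

    read = job["read"]
    # Completion-latency percentiles show consistency, not just averages.
    pcts = read["clat_ns"]["percentile"]
    print(f"read iops:    {read['iops']:.0f}")
    print(f"read p50/p99: {pcts['50.000000'] / 1000:.0f} / {pcts['99.000000'] / 1000:.0f} us")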

That understanding allows Solidigm to make intelligent choices about how to tune its drives for different situations to provide superior and consistent performance under real-world conditions. Solidigm has designed its drives at the NAND and firmware level to support this tuning so that its drives can adapt to the demands made of them.

And thanks to Solidigm’s deep technical engagements with customers that include the world’s largest clouds, we can all benefit from what Solidigm has learned. While we may not all be building public clouds, we know that there really aren’t many truly unique workloads. Why not take advantage of what others have learned?

Conclusion

There’s nothing wrong with using the four corners as a starting point for performance analysis, but it would be irresponsible to stop there. It’s not unreasonable for those depending on us to make good architectural choices to expect that we will dig a little deeper.

They’ve placed their trust in us, so why not strive to be the best we can be?

About the author

Justin Warren

Justin is Chief Analyst and managing director of PivotNine, a firm that advises vendors on positioning and marketing, and customers on how to evaluate and use technology to solve business problems. He is a regular contributor at Forbes.com, CRN Australia, and iTNews.
