I’m not a fan of making press releases on behalf of other companies; however, once in a while a news item catches my interest. So it is with the announcement of the Violin Memory Inc. 3200 series of all-memory storage arrays. Why are these interesting? Because I think they are moving, and potentially blurring, the boundary between spinning drives and memory-based permanent data storage.
Building arrays from pure memory isn’t new; Texas Memory Systems have had the RamSan series of products on the market for some time now (and there are others out there). Of course, the problem for many large organisations is how to make use of such an expensive and relatively small device. There are plenty of use cases where flash/SSD may be useful, but (a) it is difficult to target exactly which applications and (b) for those applications that can be identified, potentially only part of the data will benefit from acceleration.
One solution has been to follow the route of the traditional vendors and add SSD as an extra device within the same hardware chassis. This isn’t a solution to using SSD but rather a sticking plaster over the problem; the SSD may give better read performance, but it is unlikely that writes will be accelerated to the level justified by the additional cost of the SSD device itself. In addition, the SSD is sitting behind a traditional storage array. Vendors such as EMC, IBM and Hitachi have spent millions of man-hours and hundreds of millions of dollars on software development to help smooth the impact and manage the unpredictable performance of hard drives. Remember that when an I/O request is received, the storage array has no idea where a mechanical device like a hard drive is positioned, and so cache, algorithms and other clever intellectual property have been used to mask these physical inadequacies.
However, despite vendors’ best efforts, spikes and unpredictable response times do occur and there’s no way to remove them and guarantee completely consistent I/O responses.
The Violin Approach
So what happens if you can remove the cost issues and buy an SSD-based array for the same price as tier 1 storage? This is the route Violin Memory are taking to market — make the SSD storage array as closely priced to tier 1 arrays as possible. Remove the thought process and complications of determining what to place on SSD by making the price argument irrelevant.
In reality, Violin haven’t reached that price parity yet; prices are quoted around the $20/GB mark, which is around double what I’d expect to see for tier 1 storage (depending on volume). However, it is in the order of magnitude where organisations can look at those troublesome applications and decide that the cost of additional servers, disk spindles or re-writing the application is outweighed by simply moving the application to a Violin SSD device.
I think this is the ultimate tipping point for SSD use; where the cost of improving application performance is exceeded by the cost of moving to SSD, then SSD will win. Where improving application performance is justified by increased business advantage, the business case is written.
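The tipping-point argument above is just a cost comparison, and can be sketched in a few lines. The per-GB prices come from the article ($20/GB quoted for the Violin array versus roughly half that for tier 1); the dataset size and the cost of the alternative fix are hypothetical placeholders, not vendor figures.

```python
# Break-even sketch for the SSD tipping point described in the text.
# Prices per GB are the article's rough figures; everything else is a
# hypothetical worked example, not real vendor or customer data.

def ssd_premium(capacity_gb, ssd_per_gb=20.0, tier1_per_gb=10.0):
    """Extra cost of placing a dataset on the SSD array instead of tier 1."""
    return capacity_gb * (ssd_per_gb - tier1_per_gb)

# Hypothetical 2 TB hot dataset behind a troublesome application.
premium = ssd_premium(2000)   # $20,000 extra to place it on SSD

# Hypothetical alternative remedy: extra spindles/servers plus a rewrite.
alternative_fix = 60000.0

# The article's tipping point: SSD wins when its premium is the cheaper fix.
print(premium, premium < alternative_fix)
```

The point of the sketch is simply that once the per-GB premium shrinks, the SSD side of this inequality wins for more and more applications.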
OK, let’s have a look at the technical specifications for the techies amongst you. Firstly, today’s device capacity sits at 10TB in 3U and is expected to grow to 20TB in Q3. I’ve also been told that this capacity is expected to be close to 5x greater by the end of 2010, which means 100TB of memory-based storage in a 3U unit.
The 3200 supports PCIe (x4 & x8) as well as 4/8Gb Fibre Channel and 10Gb iSCSI and FCoE. Latency is less than 100 microseconds.
Violin arrays use VIMMs (Violin’s name for their flash memory cards). These are grouped together into 1TB units, using RAID-5 technology to manage failures. Maintenance can be performed online periodically to replace failed VIMM devices.
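For those unfamiliar with how RAID-5 lets a group survive a member failure, the textbook mechanism is XOR parity: the parity member is the XOR of the data members, so any single lost member can be rebuilt from the survivors. The sketch below illustrates that generic mechanism only; it says nothing about Violin’s actual VIMM layout or stripe sizes.

```python
# Generic RAID-5 parity illustration: parity = XOR of the data members,
# so one failed member is recoverable from the remaining members plus parity.
# Member names and sizes here are toy values, not Violin internals.

def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"VIMM-0..", b"VIMM-1..", b"VIMM-2.."]   # toy data members
parity = xor_blocks(data)                         # parity member

# Simulate losing member 1 and rebuilding it from the survivors + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```

The same property is why a failed VIMM can be swapped and rebuilt online rather than taking the unit down.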
There’s one major issue with flash/memory-based arrays that Violin claim to have addressed: the issue of degraded performance over time. Have a look at the following graphic, showing a saturated workload on the Crucial C300 versus the X25-M from Intel. This graph and the associated review can be found on Anandtech’s website here. With heavy use, the performance of these devices drops off very quickly. Violin claim their array doesn’t suffer similar issues and can deliver sustained performance. Of course, we can believe that statement once we’ve seen a review of the product delivering the performance as promised.
A 10/20TB capacity in 3U isn’t huge by today’s standards. If Violin Memory can deliver on their promises and bring a 3 to 5-fold increase in capacity by year end (with a continual reduction in price), then things start to look interesting. I’d like to see the results of some long-term stress tests on the 3200 series devices. I have some more material to post in the coming days, once I can validate what’s open and not under NDA/embargo. In the meantime, here are some questions to ponder:
- Do I have any I/O bound applications?
- Can I measure/determine my I/O bound applications?
- Is there direct business advantage from increasing I/O throughput?
If you can start answering yes to the above questions, then perhaps SSD-based arrays are for you.
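On the second question — measuring whether an application is I/O bound — one crude first-pass signal on Linux is the share of CPU time spent in iowait, taken from the `cpu` line of /proc/stat. The snapshots below are hypothetical sample values, used instead of live reads so the arithmetic is visible; in practice you would read /proc/stat twice, some seconds apart.

```python
# Crude "am I I/O bound?" check: fraction of elapsed CPU time spent in iowait
# between two snapshots of the 'cpu' line of /proc/stat (Linux only).
# The snapshot values below are hypothetical, for illustration.

def iowait_fraction(before, after):
    """Fraction of elapsed CPU time spent waiting on I/O between snapshots.

    Each snapshot is the counter list from the 'cpu' line of /proc/stat:
    user, nice, system, idle, iowait, irq, softirq, ...
    """
    deltas = [a - b for a, b in zip(after, before)]
    return deltas[4] / sum(deltas)   # field 5 is iowait

snap1 = [1000, 10, 300, 5000, 200, 0, 50]
snap2 = [1100, 10, 340, 5200, 600, 0, 60]   # iowait grew by 400 of 750 ticks
print(round(iowait_fraction(snap1, snap2), 2))  # 0.53
```

A sustained high iowait fraction is only a hint, not proof — per-device statistics (iostat, /proc/diskstats) and application-level latency figures are needed before concluding an SSD array is the right fix.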